To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution.
Tamar Eilam has worked at IBM for the past 24 years. She is currently an IBM Fellow and Chief Scientist for Sustainable Computing, helping teams reduce the amount of energy consumed by computing. The work she is most proud of is Kepler, an open source project that quantifies the energy consumption of individual containerized applications.
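Kepler exports its per-container measurements as Prometheus metrics, so the kind of figures Eilam describes can be pulled with an ordinary query. Below is a minimal sketch, not IBM's code, of summing each container's energy use over the last hour; the Prometheus address and the `kepler_container_joules_total` metric name are assumptions based on Kepler's documented naming and should be checked against your own deployment.

```python
# Minimal sketch: read per-container energy figures that Kepler exports to Prometheus.
# The Prometheus URL and metric name below are assumptions; verify them in your setup.
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed local Prometheus endpoint

def container_energy_joules(namespace: str, window: str = "1h") -> dict:
    """Return joules consumed per container in `namespace` over `window`."""
    # increase() over the counter gives energy consumed within the time window.
    query = (
        f'sum by (container_name) ('
        f'increase(kepler_container_joules_total{{container_namespace="{namespace}"}}[{window}])'
        f')'
    )
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return {r["metric"].get("container_name", "unknown"): float(r["value"][1]) for r in results}

if __name__ == "__main__":
    for name, joules in sorted(container_energy_joules("default").items()):
        print(f"{name}: {joules / 3600:.2f} Wh over the last hour")
```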
In many ways, she was ahead of her time. As the AI revolution progresses, energy consumption has become one of the most pressing topics in the industry: AI consumes vast amounts of natural resources, and both training and running models are energy-intensive. A report from Goldman Sachs this year found that a single ChatGPT query requires nearly 10 times as much electricity as a Google search, and that AI is expected to drive a 160% increase in data center power demand in the coming years.
Eilam is working at IBM to help address this.
“We need to focus on sustainability in general,” she told TechCrunch. “We have challenges, but we also have opportunities.”
The energy problem
Eilam believes the industry has a problem on its hands. AI, she said, has the potential to make industries more sustainable, even though the technology itself is currently a drain on resources.
In fact, computing and AI can help decarbonize the power grid, she said. The grid currently relies in part on renewable sources such as hydro, solar, and wind, whose price and availability fluctuate. That means data centers that depend on them struggle to guarantee service that is consistent in both price and power. “By aligning the power grid with computing and shifting or reducing workloads, we can actually support decarbonization,” she said.
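To make the idea concrete, here is a toy sketch, not IBM's scheduler, of carbon-aware workload shifting: a flexible batch job is started in the hour with the lowest forecast grid carbon intensity. The forecast numbers are invented for illustration; a real system would pull them from a grid-data provider.

```python
# Toy illustration of workload shifting: run a flexible job when forecast grid
# carbon intensity is lowest. Forecast values below are hypothetical.
from datetime import datetime, timedelta

def pick_greenest_start(forecast_g_per_kwh: list[float], now: datetime) -> datetime:
    """Return the start of the hour with the lowest forecast carbon intensity."""
    best_offset = min(range(len(forecast_g_per_kwh)), key=lambda h: forecast_g_per_kwh[h])
    return now + timedelta(hours=best_offset)

if __name__ == "__main__":
    # Hypothetical 12-hour forecast of grid carbon intensity (gCO2/kWh).
    forecast = [420, 410, 380, 300, 220, 180, 190, 260, 340, 400, 430, 450]
    now = datetime.now().replace(minute=0, second=0, microsecond=0)
    start = pick_greenest_start(forecast, now)
    print(f"Schedule the training job to start at {start:%Y-%m-%d %H:00}")
```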
But her concerns go beyond natural resources. “Think about the number of chips we're making and the carbon costs and toxic materials that go into making those chips,” she said of the industry.
At IBM, she says, she keeps all of these issues top of mind and strives to approach sustainable AI holistically when finding solutions. For example, IBM is leading a program sponsored by the National Science Foundation that could identify where “forever chemicals” exist in AI chips and accelerate the discovery of new materials to replace them, she said.
Operationally, she advises teams on how to train AI models in ways that save energy. “Using not just less data, but higher-quality data, we converge faster to more accurate solutions,” she said.
For inference, she says, IBM has speculative decoding techniques that improve efficiency. “Then you go down the stack,” she continued. “Since we have our own platform, we are building a lot of optimizations related to how these models are deployed on accelerators.”
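Speculative decoding itself is a general technique: a small draft model proposes a few tokens cheaply, and the large target model verifies them, accepting or rejecting so the final output still follows the target model's distribution. The toy sketch below is not IBM's implementation; two made-up token distributions stand in for the draft and target models to show the accept/reject step.

```python
# Toy sketch of speculative decoding with fixed token distributions standing in
# for real models. Real systems plug in actual LLM probabilities here.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def draft_probs(_context):
    # Cheap proposal distribution (hypothetical).
    return {"the": 0.3, "cat": 0.25, "sat": 0.2, "on": 0.1, "mat": 0.1, ".": 0.05}

def target_probs(_context):
    # Expensive, more accurate distribution (hypothetical).
    return {"the": 0.2, "cat": 0.3, "sat": 0.25, "on": 0.1, "mat": 0.1, ".": 0.05}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def speculative_step(context, k=4):
    """Propose k draft tokens, then accept/reject so output matches the target model."""
    drafts, ctx = [], list(context)
    for _ in range(k):
        tok = sample(draft_probs(ctx))
        drafts.append(tok)
        ctx.append(tok)

    accepted, ctx = [], list(context)
    for tok in drafts:
        p, q = target_probs(ctx)[tok], draft_probs(ctx)[tok]
        if random.random() < min(1.0, p / q):
            accepted.append(tok)  # accepted without a full target-model decode step
            ctx.append(tok)
        else:
            # Rejected: resample from the residual distribution max(0, p - q).
            p_dist, q_dist = target_probs(ctx), draft_probs(ctx)
            residual = {t: max(0.0, p_dist[t] - q_dist[t]) for t in VOCAB}
            total = sum(residual.values())
            accepted.append(sample({t: w / total for t, w in residual.items()}))
            return accepted
    # All drafts accepted: take one bonus token from the target model.
    accepted.append(sample(target_probs(ctx)))
    return accepted

if __name__ == "__main__":
    print("generated:", speculative_step(["<s>"], k=4))
```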
IBM believes in openness and heterogeneity, the latter of which means models are not one-size-fits-all, she says. “This is why we released Granite in several different sizes. Based on your use case, you choose the right size, which potentially lowers costs, fits your needs, and reduces energy consumption.”
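In practice, “choosing the right size” can be as simple as mapping task types to the smallest checkpoint considered adequate. The sketch below illustrates that idea; the Hugging Face model IDs and the task-to-size mapping are assumptions for illustration, not IBM guidance, so check the ibm-granite organization for the sizes actually published.

```python
# Hedged sketch: pick the smallest Granite variant that fits the task instead of
# defaulting to the largest. Model IDs and task mapping below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

GRANITE_BY_TASK = {
    "classification": "ibm-granite/granite-3.0-2b-instruct",   # small, cheaper to serve
    "summarization": "ibm-granite/granite-3.0-2b-instruct",
    "code-generation": "ibm-granite/granite-3.0-8b-instruct",  # larger, only when needed
}

def load_right_sized_model(task: str):
    """Load the smallest Granite checkpoint considered adequate for `task`."""
    model_id = GRANITE_BY_TASK.get(task, "ibm-granite/granite-3.0-2b-instruct")
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

# Usage: tokenizer, model = load_right_sized_model("summarization")
```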
The platform has built-in observability to quantify everything from energy consumption to latency to throughput, she said. She sees this work as increasingly important, especially as she hopes more people will trust that IBM's models offer an effective and sustainable way to compute. “What we want to say to them is, ‘Hey, don't start from scratch,’” she said. “Take Granite, for example, and tweak it. Do you know how much energy you save by not starting from scratch?”
“The reason they want to develop their own models from scratch is because they don't trust what's out there,” she said. “We carry intellectual property indemnification for all of our models because we can accurately account for the data that goes in and guarantee that there is no intellectual property infringement. So we're saying, ‘You can trust our model.’”
A woman in AI
Eilam's background is in distributed cloud computing, but in 2019 she attended a software conference where one of the keynotes was about climate change. “After that talk, I couldn't stop thinking about sustainability,” she said.
So she set out to make a difference by fusing climate and computing. But as she dug deeper into AI, she found she was often the only woman in the room. She said she learned a lot about unconscious bias, which both men and women carry in different ways. “I think a lot about raising awareness, especially as a woman in a leadership role,” she said.
She co-led an IBM Research workshop a few years ago to talk to women about this kind of bias. For example, women often won't apply for a job unless they meet more than 70% of the qualifications, while men will apply even when they meet less than 50%. She has some advice for women starting their own professional journeys: never be afraid to have an opinion and express it.
“Hold on to it. If they don't listen, say it again another time, and then another. That's the best advice I can give you.”
What will the future hold?
Eilam believes investors should focus on startups that are transparent about their innovations.
“Are they disclosing their data sources?” she asked, adding that the same applies to companies sharing how much energy their AI consumes. She also says it's important for investors to look at whether startups have guardrails in place to help prevent high-risk scenarios.
She also said she is in favor of stronger regulation, though it may be difficult to implement because the technology can be so complex. The first step, though, is to come back to transparency: explaining what is going on and being honest about the impact the technology has.
“If there is no explainability where we use [AI] in ways that affect people's potential futures, there's a problem here,” she said.
This piece has been updated.