DiscoverGold
3 weeks ago
Amazon doubles down on AI startup Anthropic with $4 billion investment
By: Investing | November 22, 2024
(Reuters) - Amazon.com has pumped an additional $4 billion into artificial intelligence startup Anthropic, as the e-commerce giant goes up against Big Tech rivals in a race to capitalize on generative AI technology.
This doubles Amazon (NASDAQ:AMZN)'s investment in the firm known for its GenAI chatbot Claude, but it remains a minority investor, the startup said on Friday. Amazon will also be Anthropic's main training partner for AI models.
Amazon, which is Anthropic's primary cloud partner, is fiercely competing with Microsoft (NASDAQ:MSFT) and Alphabet (NASDAQ:GOOGL)'s Google to offer AI-powered tools for its cloud customers.
"The investment in Anthropic is essential for Amazon to stay in a leadership position in AI," said D.A. Davidson analyst Gil Luria.
The e-commerce company's increased investment in Anthropic underscores the billions of dollars funneled into AI startups over the past year, as investors look to cash in on a boom in the technology, which became popular with the launch of OpenAI's ChatGPT in late 2022.
Microsoft-backed OpenAI raised $6.6 billion from investors last month, which could value the company at $157 billion and cement its position as one of the most valuable private companies in the world.
Anthropic plans to train and deploy its foundational models on Amazon's Trainium and Inferentia chips. The intensive process of training AI models requires powerful processors, making securing pricey AI chips a top priority for startups.
"It (partnership) also allows Amazon to promote its AI services such as leveraging its AI chips for training and inferencing, which Anthropic is using," Luria said.
Nvidia (NASDAQ:NVDA) currently dominates the market for AI processors and counts Amazon among its long list of so-called hyperscaler customers.
Still, Amazon has been working to develop its own chips through its Annapurna Labs division, which Anthropic said it was "working closely with" to aid in developing processors.
Anthropic, co-founded by former OpenAI executives and siblings Dario and Daniela Amodei, said last year it had secured a $500 million investment from Alphabet, which promised to invest another $1.5 billion over time.
The startup also uses Alphabet's Google Cloud services as part of its operations.
Read Full Story »»»
DiscoverGold
Bountiful_Harvest
1 month ago
Cool Beans! Big Tech group’s Annapurna Labs is spending big to build custom chips that lessen its reliance on market leader
https://arstechnica.com/ai/2024/11/amazon-ready-to-use-its-own-ai-chips-reduce-its-dependence-on-nvidia/
Amazon is poised to roll out its newest artificial intelligence chips as the Big Tech group seeks returns on its multibillion-dollar semiconductor investments and to reduce its reliance on market leader Nvidia.
Executives at Amazon’s cloud computing division are spending big on custom chips in the hopes of boosting the efficiency inside its dozens of data centers, ultimately bringing down its own costs as well as those of Amazon Web Services’ customers.
The effort is spearheaded by Annapurna Labs, an Austin-based chip start-up that Amazon acquired in early 2015 for $350 million. Annapurna’s latest work is expected to be showcased next month when Amazon announces widespread availability of ‘Trainium 2,’ part of a line of AI chips aimed at training the largest models.
Trainium 2 is already being tested by Anthropic—the OpenAI competitor that has secured $4 billion in backing from Amazon—as well as Databricks, Deutsche Telekom, and Japan’s Ricoh and Stockmark.
AWS and Annapurna’s target is to take on Nvidia, one of the world’s most valuable companies thanks to its dominance of the AI processor market.
“We want to be absolutely the best place to run Nvidia,” said Dave Brown, vice-president of compute and networking services at AWS. “But at the same time we think it’s healthy to have an alternative.” Amazon said ‘Inferentia,’ another of its lines of specialist AI chips, is already 40 percent cheaper to run for generating responses from AI models.
“The price [of cloud computing] tends to be much larger when it comes to machine learning and AI,” said Brown. “When you save 40 percent of $1,000, it’s not really going to affect your choice. But when you are saving 40 percent on tens of millions of dollars, it does.”
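Brown's point is simple proportionality: a fixed percentage discount yields absolute savings that scale with spend. A minimal sketch (the dollar figures here are illustrative, not from the article):

```python
def savings(annual_spend: float, discount: float = 0.40) -> float:
    """Absolute dollars saved at a given discount rate."""
    return annual_spend * discount

# The same 40% discount is negligible at small scale...
small = savings(1_000)          # $400 -- unlikely to sway a buying decision
# ...but material at hyperscale AI-training budgets.
large = savings(30_000_000)     # $12,000,000 -- enough to drive chip choice
```

The asymmetry is why AWS pitches Trainium and Inferentia primarily to customers with very large training and inference bills, where even a modest per-unit cost edge over Nvidia compounds into real money.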
Amazon now expects around $75 billion in capital spending in 2024, with the majority on technology infrastructure. On the company’s latest earnings call, chief executive Andy Jassy said he expects the company will spend even more in 2025.
This represents a surge on 2023, when it spent $48.4 billion for the whole year. The biggest cloud providers, including Microsoft and Google, are all engaged in an AI spending spree that shows little sign of abating.
Amazon, Microsoft, and Meta are all big customers of Nvidia, but are also designing their own data center chips to lay the foundations for what they hope will be a wave of AI growth.
“Every one of the big cloud providers is feverishly moving towards a more verticalized and, if possible, homogenized and integrated [chip technology] stack,” said Daniel Newman at The Futurum Group.
“Everybody from OpenAI to Apple is looking to build their own chips,” noted Newman, as they seek “lower production cost, higher margins, greater availability, and more control.”
“It’s not [just] about the chip, it’s about the full system,” said Rami Sinno, Annapurna’s director of engineering and a veteran of SoftBank’s Arm and Intel.
For Amazon’s AI infrastructure, that means building everything from the ground up, from the silicon wafer to the server racks they fit into, all of it underpinned by Amazon’s proprietary software and architecture. “It’s really hard to do what we do at scale. Not too many companies can,” said Sinno.
After starting out building a security chip for AWS called Nitro, Annapurna has since developed several generations of Graviton, its Arm-based central processing units that provide a low-power alternative to the traditional server workhorses provided by Intel or AMD.
“The big advantage to AWS is their chips can use less power, and their data centers can perhaps be a little more efficient,” driving down costs, said G Dan Hutcheson, analyst at TechInsights. If Nvidia’s graphics processing units are powerful general purpose tools—in automotive terms, like a station wagon or estate car—Amazon can optimize its chips for specific tasks and services, like a compact or hatchback, he said.
So far, however, AWS and Annapurna have barely dented Nvidia’s dominance in AI infrastructure.
Nvidia logged $26.3 billion in revenue for AI data center chip sales in its second fiscal quarter of 2024. That figure is the same as Amazon announced for its entire AWS division in its own second fiscal quarter—only a relatively small fraction of which can be attributed to customers running AI workloads on Annapurna’s infrastructure, according to Hutcheson.
As for the raw performance of AWS chips compared with Nvidia’s, Amazon avoids making direct comparisons and does not submit its chips for independent performance benchmarks.
“Benchmarks are good for that initial: ‘hey, should I even consider this chip,’” said Patrick Moorhead, a chip consultant at Moor Insights & Strategy, but the real test is when they are put “in multiple racks put together as a fleet.”
Moorhead said he is confident Amazon’s claims of a 4-times performance increase between Trainium 1 and Trainium 2 are accurate, having scrutinized the company for years. But the performance figures may matter less than simply offering customers more choice.
“People appreciate all of the innovation that Nvidia brought, but nobody is comfortable with Nvidia having 90 percent market share,” he added. “This can’t last for long.”
© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.