
Nvidia's new Blackwell chips train Meta's Llama 3.1 in only 27 minutes

Nvidia's new Blackwell chips are changing how fast artificial intelligence systems can be trained.

In the latest round of benchmark results, released on Wednesday by MLCommons, a nonprofit group that tracks and compares the capabilities of artificial intelligence chips, Nvidia's Blackwell architecture set new records.

When tested on Llama 3.1 405B, one of the largest and most complex open-source artificial intelligence models, training was completed in only 27 minutes using Blackwell chips. The run used 2,496 Blackwell graphics processing units, far fewer than would have been needed with Nvidia's previous-generation Hopper chips.

By contrast, earlier designs used more than three times as many Hopper graphics processing units to deliver equivalent performance. On a per-chip basis, Blackwell was more than twice as fast, a major leap in training efficiency. For institutions training trillion-parameter models, that kind of performance gain can translate into significant savings in cost and time.
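The comparison above can be sketched as rough arithmetic. Note the assumptions: the 3x GPU-count ratio is the one reported here, and the implied per-chip factor assumes equal wall-clock time for both runs, which the article does not state precisely (it reports "more than twice" per chip):

```python
# Rough efficiency arithmetic from the MLPerf result described above.
# Reported: 2,496 Blackwell GPUs finished the Llama 3.1 405B training
# benchmark in 27 minutes; prior Hopper-based designs needed roughly
# three times as many GPUs for equivalent performance.
blackwell_gpus = 2496
hopper_gpus = 3 * blackwell_gpus  # ~3x, per the reported comparison

# Per-chip speedup implied IF both runs take the same wall-clock time
# (an assumption for illustration; the article says "more than twice").
per_chip_speedup = hopper_gpus / blackwell_gpus
print(f"Hopper GPUs needed: {hopper_gpus}")
print(f"Implied per-chip speedup: {per_chip_speedup:.1f}x")
```

This is only back-of-the-envelope framing; actual MLPerf submissions vary in cluster size, software stack, and run time, so per-chip ratios from published results are not this clean.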

These results are believed to be the first MLCommons benchmarks for training models at this extreme scale, and they provide a realistic measure of how well the chips handle the most demanding artificial intelligence workloads.

CoreWeave and Nvidia drive smarter AI scaling

The results were not just a win for Nvidia; they also highlighted the work of CoreWeave, the cloud infrastructure company that partnered on the tests. At a press conference, CoreWeave's chief product officer, Chetan Kapoor, pointed to a general trend that is increasingly taking hold in the industry: a move away from large, homogeneous blocks of tens of thousands of graphics processing units.

Instead of building a single, huge, homogeneous computing system, companies are now looking at smaller, interconnected subclusters that can manage massive model training more efficiently and scale better.

Kapoor said that with such a technique, developers can keep scaling up, or cut the time needed, to train very large models with trillions of parameters.

The shift to modular hardware deployment is also becoming necessary as artificial intelligence models keep growing in size and complexity.

Blackwell puts Nvidia at the forefront of AI model training

Although the industry's focus has recently shifted to AI inference, where models such as ChatGPT answer user questions in real time, training is still what drives the development of artificial intelligence.

Training is what gives these models their intelligence, allowing them to understand language, tackle some of the most challenging problems, and even produce human-like prose. It is extremely compute-intensive, requiring thousands of high-performance chips running for long stretches, usually days, if not weeks or months.

That has changed with Nvidia's Blackwell architecture. By radically cutting both the number of chips and the time it takes to train gargantuan AI models, Blackwell gives Nvidia the upper hand in a market governed by speed and efficiency.

Until now, training models such as Meta's Llama 3.1 405B, with its hundreds of billions of parameters, meant operating huge clusters of graphics processing units in an enormously power-hungry process.

These performance gains are a big deal at a time of intense demand for ever larger and more powerful AI models across many industries, from healthcare and finance to education and autonomous vehicles.

The result also sends a clear message to Nvidia's competitors. Chipmakers such as AMD and Intel, which are working on their own AI accelerators, are now under greater pressure to keep a similar pace.

AMD submitted to the MLCommons benchmark, but it did not show results for a model as large as Llama 3.1 405B. Nvidia was the only company tested at the top end of the benchmark, demonstrating that its hardware is both superior and ready for the most difficult challenges.
