
For best results with LLMs, use JSON outputs

This is the fourth part of a continuing series. See parts 1, 2, and 3.

Fourth AI principle: use structured prompt outputs

There was a time, long, long ago, when LLM APIs had just come out and no one yet knew for sure how to interact with them properly. One of the bigger problems was extracting multiple outputs from a single prompt response. Back then LLMs did not reliably return JSON (and they failed a lot), so I tried to persuade them to cooperate with my best prompt-engineering rhetoric.

Those were ancient times. We traveled on horseback and wrote prompts by candlelight, as electricity had not yet been invented. A prompt mistake meant long nights spent staring at handwritten manuscripts, hoping the model would return a list instead of a haiku. And if it failed, you had no choice but to sigh deeply, refill your inkwell, and try again.

Well, I made that last part up. But LLM APIs that could not reliably return JSON were a real thing and caused many problems. Everything began to change with structured outputs in November 2023 – you could now ask the OpenAI API to return well-formed JSON. In 2024, OpenAI also added support for strict structured outputs, guaranteeing that valid JSON is always returned. Anthropic and Google added similar API improvements. The era of fragile, unparseable outputs has passed, and we are never going back.

Benefits

Why is it better to request structured JSON outputs from a prompt instead of some other format, or a custom format of your own invention?

Decreased error rate

Modern LLMs have been fine-tuned to emit valid JSON on request – it is rare for them to fail, even with very complex responses. In addition, many platforms provide software-level safeguards against incorrectly formatted outputs. For example, the OpenAI API throws an exception when something other than JSON is returned in strict structured output mode.

If you use a custom format to return multiple output variables, you will not benefit from these safeguards, and the error rate will be much higher. You will spend time re-engineering the prompt and adding retries.
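The same safeguard can be mirrored on the application side. Here is a minimal sketch (the field names `sentiment` and `confidence` are illustrative, not from any particular API) of a parser that fails loudly when the response is not the JSON that was requested:

```python
import json

# Hypothetical output schema for a sentiment prompt; the field
# names are illustrative, not part of any real API.
REQUIRED_FIELDS = {"sentiment", "confidence"}

def parse_response(raw: str) -> dict:
    """Parse a model response, failing loudly if it is not the JSON we asked for."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"response is missing fields: {sorted(missing)}")
    return data

# Under strict structured outputs, responses like this are guaranteed to parse:
result = parse_response('{"sentiment": "positive", "confidence": 0.93}')
```

With strict mode enabled on the platform side, the error branches above should almost never trigger – but keeping them makes failures visible instead of silent.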

Decoupling prompts and code

With JSON output, adding another output field is trivial, and doing so should not break existing code. This decouples adding fields to the prompt from changes to the response-handling logic. It can save you a surprising amount of time and effort, especially when prompts are loaded externally; see the second principle: load LLM prompts safely (if you really have to).
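To make the decoupling concrete, here is a small sketch (the field names are made up for illustration): the handler reads only the fields it knows about, so a new field introduced by a prompt change passes through harmlessly:

```python
import json

def handle_response(raw: str) -> str:
    """Existing handler: reads only the field it knows about."""
    return json.loads(raw)["summary"]

old_response = '{"summary": "Order shipped."}'
# A later prompt version adds a "keywords" field; no code change is
# needed, because the handler ignores JSON fields it does not read.
new_response = '{"summary": "Order shipped.", "keywords": ["order", "shipping"]}'
```

With a custom delimiter-based format, the same prompt change would typically force a matching change to the parsing code.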

Simplicity

Is there a practical reason to use an output format that lacks built-in platform support? If not, it will be easier for you and future maintainers of the code to parse responses as JSON. Don't reinvent the wheel unless you have to.

When not to use structured outputs

Single-field outputs

If the prompt outputs a single field in its response, there is no benefit to JSON output. Or is there?

Single-field responses today may become complex responses tomorrow. After spending hours converting single-field prompts into multi-field ones, I now default to JSON even when returning only one field. This saves time later while adding minimal extra complexity up front.

Even when the program logic does not need multiple outputs, there are prompt-engineering and debugging advantages to adding extra fields. Adding a field that provides an explanation for the response (or cites a source document) can greatly improve prompt performance (1). It can also be logged as a record of the model's decisions. Having the response in JSON from the start makes adding such a field much easier.
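A sketch of this pattern (the `label`/`explanation` schema is hypothetical): the program logic consumes only the main field, while the explanation is logged so model decisions can be audited later:

```python
import json
import logging

def classify(raw: str) -> str:
    data = json.loads(raw)
    # The explanation is not needed by the program logic; it is
    # logged for debugging, auditing, and prompt iteration.
    logging.info("model explanation: %s", data.get("explanation", ""))
    return data["label"]

response = '{"label": "spam", "explanation": "The message contains a suspicious link."}'
```

If the response had started life as a bare string, adding the explanation field would have required changing every caller; with JSON, only the prompt changes.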

So even if the prompt has a single output variable, consider JSON as the format of choice.

Streaming

For applications where latency is critical, streaming LLM endpoints are often used. These allow rendering parts of the response before the entire response has been received. This pattern does not work well with JSON, so you should use a simple, streaming-friendly format instead.

For example, if your prompt decides on an action for a game character together with the words the character says, you can encode the result as "action | speech" and stream it over the OpenAI streaming API. This will give you a much better time to first response.

Example output:

wave_at_hero | Hello, adventurer! Welcome to my store.

As soon as the action is received, the character starts waving, and the text is rendered as it streams in.
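A rough sketch of parsing such a pipe-delimited stream (the chunk boundaries below are arbitrary, as they would be with a real streaming API): the action is emitted the moment the delimiter arrives, and the speech text is forwarded chunk by chunk afterwards:

```python
def stream_dialogue(chunks):
    """Yield ("action", name) as soon as the delimiter arrives,
    then ("speech", text) pieces as they stream in."""
    buffer = ""
    action_sent = False
    for chunk in chunks:
        buffer += chunk
        if not action_sent and "|" in buffer:
            action, buffer = buffer.split("|", 1)
            yield ("action", action.strip())
            action_sent = True
        if action_sent and buffer:
            yield ("speech", buffer)
            buffer = ""

# Simulated stream with arbitrary chunk boundaries:
events = list(stream_dialogue(["wave_at", "_hero | Hel", "lo, adventurer!"]))
```

The character can start its animation after the second chunk, long before the speech has finished arriving – exactly what JSON, which must be complete before it parses, cannot offer.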

JSON Lines and other streaming-friendly formats can also be used effectively.
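With JSON Lines, for instance, each event is a complete JSON document on its own line, so it can be parsed the moment its trailing newline arrives, without waiting for the rest of the response (the event shapes below are illustrative):

```python
import json

# A simulated JSON Lines stream: one self-contained JSON document per line.
raw_stream = (
    '{"action": "wave_at_hero"}\n'
    '{"speech": "Hello, adventurer!"}\n'
    '{"speech": "Welcome to my store."}\n'
)

# Each completed line parses independently of the lines after it.
events = [json.loads(line) for line in raw_stream.splitlines()]
```

This keeps the error-rate benefits of JSON while remaining incremental, at the cost of slightly more verbose output than a bare delimiter format.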

Conclusion

Do not reject the benefits of civilization – use structured JSON prompt outputs. There are hardly any downsides, and it will make your life much easier, as LLMs have been heavily optimized to return valid JSON responses. Consider defaulting to JSON even if the output is currently a single field. For streaming endpoints, use JSON Lines or a simple custom format.

If you enjoyed this post, subscribe to the series for more.
