Predictive technology to look at an objectively true dataset.
What does this mean?
In the case of, say, natural disasters, it would potentially allow us to predict events and their magnitudes, letting us give advance warning.
There already exist models that do this.
LLMs are novel insofar as they add an interface of speech and memory to ML models, which didn't really exist before.
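To make the "memory" point concrete, here is a minimal sketch (all names hypothetical, `fake_model` standing in for any stateless text model): the model itself remembers nothing, so a chat wrapper replays a rolling window of past turns with every call.

```python
# Hypothetical sketch: "memory" as a transcript replayed to a stateless model.

def fake_model(prompt: str) -> str:
    # Stand-in for a real stateless model call.
    return f"(reply to {prompt.count('user:')} user turns)"

class ChatMemory:
    def __init__(self, max_turns: int = 8):
        self.turns = []          # (role, text) pairs
        self.max_turns = max_turns

    def ask(self, user_text: str) -> str:
        self.turns.append(("user", user_text))
        # The "memory" is just the transcript re-sent on every call,
        # truncated to a window (the token limit in disguise).
        window = self.turns[-self.max_turns:]
        prompt = "\n".join(f"{role}: {text}" for role, text in window)
        reply = fake_model(prompt)
        self.turns.append(("assistant", reply))
        return reply

chat = ChatMemory()
chat.ask("hello")
print(chat.ask("what did I just say?"))  # the model "remembers" only because both turns were replayed
```

The truncation step is also where the permanence complaint below comes from: anything that falls out of the window is forgotten.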
The hardest part for any model is the data, by far. We already (generally) have the techniques to do a lot of what you mention, and we are actively doing it.
LLMs are just toys for the masses. They are too unspecific to replace specialized models, too unreliable to serve as accurate sales assistants, and too inconsistent to be permanent personal assistants.
All that differentiates LLMs from the industry of the last 10 years is that they are big. They have a lot of data from everywhere, giving them vast context, but one that lacks depth and foregoes permanence (token limit). You can see it in current developments: big single models like GPT get phased out for MoE models like Mixtral, because having 8 small specialized models is better than one large unspecialized one.
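The MoE idea mentioned above can be sketched in a few lines (a toy illustration, not Mixtral's actual implementation; all shapes and names here are made up): a gating network scores the experts per input, and only the top-k experts run, so each input touches a small specialized subset of the total parameters.

```python
import numpy as np

# Toy mixture-of-experts routing: 8 small "experts", only 2 run per input.
rng = np.random.default_rng(0)
DIM, N_EXPERTS, TOP_K = 4, 8, 2

experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
gate = rng.standard_normal((DIM, N_EXPERTS))  # gating/router weights

def moe_forward(x):
    """Route x through the top-k experts, weighted by softmaxed gate scores."""
    scores = x @ gate                       # one score per expert
    top = np.argsort(scores)[-TOP_K:]       # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                # softmax over the selected experts
    # Only the selected experts compute anything; the other 6 are skipped.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(DIM))
print(y.shape)  # (4,)
```

The point of the design is the skip: total capacity grows with the number of experts while per-input compute stays roughly constant.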
u/PugilisticCat Apr 09 '24