Probably the latter question reflects an obvious bias from my own media consumption, but even with my best internet-search efforts, I have a hard time finding anything interesting about GPT-4 (a name people seem to use for a new generation of LLMs following GPT-3). Obviously this is simply a result of OpenAI not releasing any new information, which makes news coverage uninformative.
Most of OpenAI's public communication regarding the LLMs they build seems focused on GPT-3-series models, in particular fine-tuned ones. That is not directly surprising, as these fine-tuned models are a major source of income for OpenAI. However, given their past release rate for the GPT series (GPT in 2018, GPT-2 in 2019, and GPT-3 in 2020), they seem to be taking quite some time with the next series (it is almost 2023). This raises two intuitive thoughts (both of which are probably far too simple to be close to reality): either OpenAI is somewhat stuck and has a hard time keeping up with its past pace of "game-changing" progress in their LLM work, or OpenAI has made extreme progress in the last few years and decided not to publicize it for strategic reasons (e.g. to avoid fueling the "race to AGI").
Any thoughts or pointers on that?
"Become silent about it"
It's good and valid that you write this, but it is very likely different from the truth. For one thing, OpenAI isn't totally locked down across its various channels, and extreme developments would come out. Secondly, I think we've been truthfully told what they are working on (see the other comments), e.g. a training-refinement/sophistication approach to LLMs.
Also, as a meta comment: in my opinion, it seems possible that the beliefs behind this valid question come from an information environment that is not the ideal one for guiding future decisions related to AI or AI safety.
Ok, writing quickly. Starting on the "object level about the beliefs":
- It seems like sentiment or buzz, such as the tweets about GPT-4 mentioned in the other comment, can be found. That gives a different picture than the silence mentioned in your post. It could likely be found by searching Twitter or other social media.
- It seems like the content of my comment (e.g. my suggestion that OpenAI has various projects under way that compete for attention/PR effort) is sort of publicly apparent, low-hanging speculation.
- Let's say that OpenAI was ac
...