GPT-4 is out. There's also a LessWrong post on this with a lot of discussion. The developers did a live-stream yesterday.
And it's been confirmed that Bing runs on GPT-4.
Also:
Here's an image from the OpenAI blog post about GPT-4:
(This is a short post.)
Lizka - thanks for sharing this.
I'm struck by one big 'human subjects' issue with the ethics of OpenAI's deployment of new GPT versions: there seems to be no formal 'human subjects' oversight of this massive behavioral experiment, even though it is gathering interactive, detailed, personal data from over 100 million users, with the goal of creating generalizable knowledge (in the form of deep learning parameters, ML insights, & human factors insights).
As an academic working in an American university, if I wanted to run a behavioral sciences experiment on as few as 10 or 100 subjects, and gather generalizable information about their behavior, I'd need to get formal Institutional Review Board (IRB) approval to do that, through a well-established system of independent review that weighs the scientific and social benefits of the research against the risks and costs for participants and for society.
On the other hand, OpenAI (and other US-based AI companies) seem to think it's perfectly fine to gather interactive, detailed, identified (non-anonymous) data from over 100 million users, without any oversight. Insofar as they've ever received any federal research money (e.g. from NSF or DARPA), this could arguably be a violation of federal code 45 CFR 46 regarding protection of human subjects.
The human subjects issues might be exacerbated by the fact that GPT users are often sharing private biomedical information (e.g. asking questions about specific diseases, health concerns, or test results they have), and it's not clear whether OpenAI has the systems in place to adequately protect this private health information, as mandated under the HIPAA rules.
It's interesting that the OpenAI 'system card' on GPT-4 lists many potential safety issues, but seems not to mention these human subjects/IRB compliance issues at all, as far as I can see.
For example, there is no real 'informed consent' process for people signing up to use ChatGPT. An honest consent procedure would include potential users reading some pretty serious cautions such as 'The data you provide will help OpenAI develop more powerful AI systems that could make your job obsolete, that could be used to develop mass customized propaganda, that could exacerbate economic inequality, and that could impose existential risks on our entire species. If you agree to these terms, please click "I agree".'
So, we're in a situation where OpenAI is running one of the largest-scale behavioral experiments ever conducted on our species, collecting gigabytes of personal information from users around the world, with the goal of distilling this information into generalizable knowledge, but seems to be entirely ignoring the human subjects protection regulations mandated by the US federal government.
EA includes a lot of experts on moral philosophy and moral psychology. Even setting aside the US federal regulatory issues, I wonder what you all think about the research ethics of deploying GPT to the general public without any informed consent or debriefing?