From the paper:
> In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents (Fig. 1). This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in the organism-specific LD50 model, which comprises mainly pesticides, environmental toxins and drugs (Fig. 1). By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.
Someone somewhere is now generating new psychoactive drug candidates.
This is also currently being discussed here: https://news.ycombinator.com/item?id=30698803
(Top of the front page, 56 comments.)
...heh, can't believe we're not yet at "Siri, use the TensorFlow 4.6 OmegaFold model to design potentially stable variants of COVID-25 with active BSE prionic inserts. ...DONE. Great, now place an order for the best generated sequence with the cheapest gene synthesis provider in <unregulated country x>. Air-mail delivery."
This particular issue, toxins, doesn't seem like a huge problem in practice.
You can already buy extremely potent toxins that also work against humans at the hardware store (some pesticides/insecticides/rat poisons).
New toxins might be interesting for covert operations, for example killing someone without detection by a toxicological report, but this seems to have limited use.
I believe we are heading towards personal safety bubbles, where detection of and warning about dangerous molecules is done in real time via a smartphone or another personal device such as glasses or an implant. It might take a while to arrive, but that seems the natural conclusion to individuals having supercomputers available. Would detection be possible via advanced mmwave/field sensing, or would it have to be a molecular mini-lab system of some sort, à la PCR? Great market opportunity here.
This seems inevitable because the rise of various threats is also inevitable via technology, and us humans like to be safe.
Honestly, it sounded overdramatic and did not offer meaningful, actionable consequences. The actual data is thin and not surprising at all, and it's also unclear how difficult it would be to design such harmful compounds without AI. I'd think any capable chemist could easily come up with hundreds of harmful substances, old and new. I guess the AI removes the chemist from the equation?
It is trivial that you can flip a sign on any network optimizing for something good and now it’s optimizing for something bad…
But it is still interesting that the network generalized to these toxic nerve agents without having them in the training set
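The "flip a sign" point can be made concrete with a toy sketch. This is not the paper's actual pipeline; the encoding, the `predicted_safety` scorer, and the hill-climbing search below are all hypothetical stand-ins. The only thing it demonstrates is the structural claim: the same optimizer turns from maximizing a desirable property to maximizing its opposite with a one-character change to the objective.

```python
import random

random.seed(0)

def predicted_safety(mol):
    # Hypothetical stand-in for a trained property model:
    # here, just the fraction of 1-bits in a toy "molecule" encoding.
    return sum(mol) / len(mol)

def hill_climb(score, n_bits=32, steps=500):
    """Greedy search: mutate one bit at a time, keep any improvement."""
    mol = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        cand = mol[:]
        cand[random.randrange(n_bits)] ^= 1  # flip one random bit
        if score(cand) > score(mol):
            mol = cand
    return mol

# Intended use: maximize the predicted desirable property.
safe = hill_climb(predicted_safety)

# "Inverted" use: the only change is a sign flip on the objective.
toxic = hill_climb(lambda m: -predicted_safety(m))
```

Nothing about the search loop knows or cares which direction is "good"; that judgment lives entirely in the sign of the scoring function, which is why the inversion is trivial.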
AI discovers drugs to make researchers better at AI discovery
Use of these drugs would go against the Geneva Protocol.
Quite an interesting paper as an impact study, but the questions it asks all boil down to "the masses are not to be trusted with our vaunted knowledge, because harm can be done; how can we prevent them from moving outside the boundaries of knowledge we like?", and then the example used is how GPT-3 is managed. That is without a doubt a terrific example of bad technological stewardship: any malicious state actor gets to use what you make for evil, while at the same time it's kept away from the folks you're making it for. I wish they had something more substantial and innovative to say about it, but as it stands it reads as regressive.