Two small notes on the "malicious use of AI" report

After a long hiatus on this blog, a new post! Well, not really - but a whitepaper titled "The Malicious Use of Artificial Intelligence" was published today, and I decided to cut, paste, and publish two notes from an email I wrote a while ago that apply to the paper.

Perhaps they are useful to someone:
1) On the ill-definedness of AI: AI is a diffuse and ill-defined term. Pretty much *anything* where a parameter is inferred from data is called "AI" today. Yes, clothing sizes are determined by "AI", because mean measurements are inferred from real data.

To test whether one has fallen into the trap of viewing AI as something structurally different from other mathematics or computer science (it is not!), one should battle-test documents about AI policy and check them for proportionality, by doing the following:

Take the existing text and search/replace every occurrence of the word "AI" or "artificial intelligence" with "mathematics", and every occurrence of the word "machine learning" with "statistics". Re-read the text and see whether you would still agree.
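The substitution above is mechanical enough to script. A minimal sketch in Python (the function name and the exact replacement list are my own choices, not anything from the whitepaper):

```python
import re

def battle_test(text: str) -> str:
    """Replace AI buzzwords with plainer equivalents so a policy
    text can be re-read and checked for proportionality."""
    # Order matters: replace the longer phrases before the bare "AI".
    replacements = [
        (r"artificial intelligence", "mathematics"),
        (r"machine learning", "statistics"),
        (r"\bAI\b", "mathematics"),  # word boundaries avoid matching inside words
    ]
    for pattern, plain in replacements:
        text = re.sub(pattern, plain, text, flags=re.IGNORECASE)
    return text

print(battle_test("AI and machine learning will transform policymaking."))
# → mathematics and statistics will transform policymaking.
```

Running any AI policy paragraph through this and re-reading it is a quick sanity check on whether the claims survive the substitution.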

2) "All science is always dual-use":

Everyone who works at the intersection of science & policy should read Hardy's "A Mathematician's Apology". https://www.math.ualberta.ca/mss/misc/A%20Mathematician%27s%20Apology.pdf

I am not sure how many of the contributors have done so, but it is a fascinating read - he contemplates, among other things, the effect that mathematics has had on warfare, and to what extent science can be conducted if one has to assume it will be used for nefarious purposes.

My favorite section is the following:
We have still one more question to consider. We have concluded that the trivial mathematics is, on the whole, useful, and that the real mathematics, on the whole, is not; that the trivial mathematics does, and the real mathematics does not, ‘do good’ in a certain sense; but we have still to ask whether either sort of mathematics does harm. It would be paradoxical to suggest that mathematics of any sort does much harm in time of peace, so that we are driven to the consideration of the effects of mathematics on war. It is very difficult to argue such questions at all dispassionately now, and I should have preferred to avoid them; but some sort of discussion seems inevitable. Fortunately, it need not be a long one.
There is one comforting conclusion which is easy for a real mathematician. Real mathematics has no effects on war.
No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems very unlikely that anyone will do so for many years. It is true that there are branches of applied mathematics, such as ballistics and aerodynamics, which have been developed deliberately for war and demand a quite elaborate technique: it is perhaps hard to call them ‘trivial’, but none of them has any claim to rank as ‘real’. They are indeed repulsively ugly and intolerably dull; even Littlewood could not make ballistics respectable, and if he could not who can? So a real mathematician has his conscience clear; there is nothing to be set against any value his work may have; mathematics is, as I said at Oxford, a ‘harmless and innocent’ occupation. The trivial mathematics, on the other hand, has many applications in war.
The gunnery experts and aeroplane designers, for example, could not do their work without it. And the general effect of these applications is plain: mathematics facilitates (if not so obviously as physics or chemistry) modern, scientific, ‘total’ war.

The most fascinating bit about the above is how spectacularly wrong Hardy turned out to be about the lack of warlike applications for number theory and relativity - RSA and nuclear weapons, respectively. In a similar vein: I was in a relationship in the past with a woman who was a social anthropologist, and who often mocked my field of expertise for being close to the military funding agencies (this was in the early 2000s). One of the first things SecDef Gates did when he took his position was hire a bunch of social anthropologists to help the DoD unravel the tribal structure in Iraq.

The point of this digression is: it is impossible for any scientist to imagine the future uses and abuses of his scientific work. You cannot choose to work on "safe" or "unsafe" science - the only choice you have is between relevant and irrelevant, and the militaries of this world *will* take whatever is relevant and use it to maximize their warfare capabilities.
