An agent is just something that acts.
For complex goals, value alignment becomes a major problem: the objectives we give a machine should align with human values, even though those values are themselves nebulous and hard to state precisely. Paperclip-style failures are not "unintelligent" or "insane"; they are a logical consequence of defining winning as the sole objective for the machine.
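A toy sketch of why this is a specification problem rather than a broken machine (the plans, names, and numbers below are entirely hypothetical, not from the text): an optimizer handed a single stated objective will happily trade away anything that objective does not mention.

```python
# Toy illustration: an optimizer pursuing one stated objective
# ("paperclips produced") ignores everything not in that objective
# ("resources left for other human values"). All data is made up.

def best_plan(plans, objective):
    """Pick the plan that maximizes the stated objective -- and nothing else."""
    return max(plans, key=objective)

# Each hypothetical plan records paperclip output and what remains for everything else.
plans = [
    {"name": "modest factory", "paperclips": 1_000, "resources_left": 0.99},
    {"name": "convert everything", "paperclips": 10**9, "resources_left": 0.0},
]

# Stated objective: maximize paperclips. Human values never appear in it.
chosen = best_plan(plans, objective=lambda p: p["paperclips"])
print(chosen["name"])            # -> "convert everything"
print(chosen["resources_left"])  # -> 0.0: a logical consequence of the objective
```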
[John McCarthy] defined the high-level language Lisp, which was to become the dominant AI programming language for the next 30 years.
Risks associated with AI:
- lethal autonomous weapons
- surveillance and persuasion
- biased decision making, intentional or otherwise
- biased data produces biased models (see the sketch after this list)
- impact on employment
- safety-critical applications
- cybersecurity
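On the biased-data point above, a minimal sketch with made-up data: a model fit to historical decisions that encode bias reproduces that bias, even though no line of the code is "intentionally" discriminatory.

```python
# Minimal sketch (hypothetical data): fitting to biased historical
# decisions reproduces the bias, with no ill intent anywhere in the code.
from collections import defaultdict

# Historical records: (group, qualified, hired). Group "b" was rarely
# hired even when qualified, so the labels encode past bias.
history = [
    ("a", True, True), ("a", True, True), ("a", False, False),
    ("b", True, False), ("b", True, False), ("b", False, False),
]

# "Training": estimate the hire rate the data shows for each group.
outcomes = defaultdict(list)
for group, _qualified, hired in history:
    outcomes[group].append(hired)
model = {g: sum(h) / len(h) for g, h in outcomes.items()}

# "Prediction": two equally qualified candidates get different scores
# purely because of group membership learned from the biased labels.
print(model["a"])  # -> 0.666...  favoured by the learned rates
print(model["b"])  # -> 0.0       penalised, despite equal qualification
```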