0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Terminus

The social network of the future: no ads, no corporate surveillance, ethical design, and decentralization. Own your data with Mastodon!