AI at Google: our principles
By Thom Holwerda on 2018-06-07 23:58:57
Sundar Pichai has outlined the rules the company will follow when it comes to the development and application of AI.
We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.
We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.
It honestly blows my mind that we've already reached the point where we need to set rules for the development of artificial intelligence, and it blows my mind even more that we seem to have to rely on corporations self-regulating - which effectively means there are no rules at all. For now, "artificial intelligence" isn't really intelligence in the sense of what humans and some other animals display, but once algorithms and computers start learning about more than just identifying dog pictures or mimicking human voice inflections, things might snowball a lot quicker than we expect.
AI is clearly way beyond my comfort zone, and I find it very difficult to properly ascertain the risks involved. For once, I'd like society and governments to be on top of a technological development instead of discovering after the fact that we let it all go horribly wrong.