- rebecca4670
Keeping Quantum AI Ethical
Each day on the Internet we produce the equivalent of 250,000 Libraries of Congress worth of data. While Moore's law has held true since the 1960s, on our limited classical computers we have trouble making sense of even a tiny fraction of all this data. Google currently claims to have a quantum computer that is 100 million times faster than any of today's computers. That comes with a handful of caveats, of course, which is why we're not all racing to buy a Google quantum computer at our local big-box store. But the promise of quantum computing is that we will soon be able to search through incredibly large unsorted sets of data and uncover patterns that were previously invisible to us. It follows that quantum computers will bring us closer and closer to general artificial intelligence - the stuff we've been promised in science fiction books. While this may still be decades away, we have a responsibility now to consider the impact of the foundations we are laying.
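To put that promise in rough quantitative terms, here is a minimal sketch in plain Python (my own illustration, not tied to any particular quantum hardware) comparing the average number of lookups a classical linear search needs to find one item in an unsorted set against the roughly (pi/4)*sqrt(N) iterations Grover's quantum search algorithm requires. The dataset sizes are arbitrary examples.

```python
import math

# Expected number of oracle queries to find one marked item
# among N unsorted items:
#   classical linear search: ~N/2 on average
#   Grover's algorithm:      ~(pi/4) * sqrt(N) iterations
for n in [10**6, 10**9, 10**12]:
    classical = n / 2
    grover = (math.pi / 4) * math.sqrt(n)
    print(f"N = {n:.0e}: classical ~{classical:.2e} queries, "
          f"Grover ~{grover:.2e} iterations")
```

Even this quadratic speedup, which is far more modest than the headline "100 million times faster" claim, changes what counts as a searchable dataset.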
Keeping AI Under Control
Until now, in machine learning and AI we've been working against the limitations of our computing power. It can sometimes take days or even months to train and fine-tune a model, even on super-fast GPU-powered machines. In theory, quantum computing will at some point eliminate these resource barriers, letting us build applications that can quickly make sense of massive amounts of data. This paves the way for eventually creating AI that can understand and make decisions about our world more efficiently than we can.
So how do we avoid creating evil robot overlords? One way is to standardize an AI moral code - a set of rules to be programmed into any decision-making AI. Isaac Asimov gave us the three laws of robotics, but we will need something a little more nuanced than that to address what it means to be compassionate and self-regulating. I would personally like to see more collaboration between quantum physicists and machine learning researchers, along with experts from diverse fields such as psychology, mindfulness research, and anthropology.
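As a toy illustration of what "a set of rules programmed into a decision-making AI" could look like at the code level, here is a minimal Python sketch; the rules and action strings are placeholders I invented, not a proposed standard.

```python
from typing import Callable

# Toy sketch: every candidate action must pass a shared set of
# constraint checks before an agent may execute it. Real rules would
# need far more nuance than simple keyword tests.
Rule = Callable[[str], bool]

RULES: list[Rule] = [
    lambda action: "harm" not in action,     # placeholder "do no harm" rule
    lambda action: "deceive" not in action,  # placeholder honesty rule
]

def is_permitted(action: str) -> bool:
    """Return True only if the action satisfies every shared rule."""
    return all(rule(action) for rule in RULES)

for action in ["recommend treatment", "deceive patient"]:
    print(action, "->", "allowed" if is_permitted(action) else "blocked")
```

The hard part, of course, is not the enforcement mechanism but agreeing on rules that capture compassion and self-regulation rather than simple prohibitions.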
Making the Data Fair and Representative
Something I've been very interested in over the last couple of years is bias in our data sets. When I say bias, I don't mean statistical bias that interferes with getting accurate results. I mean prejudice. Let me give you an example of how prejudice in machine learning datasets has had real-world consequences:
In 2016 ProPublica released a report on the Northpointe risk assessment software that has been used in courtrooms across the US to help predict how likely a defendant is to reoffend and to inform bail and sentencing decisions. They found significant built-in racial bias: black defendants were far more likely than white defendants to be incorrectly flagged as high risk. This prejudice was not intentionally programmed into the software. It wasn't a problem of malicious programmers. The prediction models were trained on historical data, so the software simply exposed historical prejudice in the courtroom.
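To make the mechanics of such an audit concrete, here is a minimal Python sketch along the lines of ProPublica's false-positive-rate comparison. The file name and column names ("race", "high_risk", "reoffended") are hypothetical, not Northpointe's actual schema.

```python
import pandas as pd

# Hypothetical audit: compare false positive rates across groups in a
# risk-score dataset. A false positive here means a defendant labeled
# high risk who did not go on to reoffend.
df = pd.read_csv("risk_scores.csv")  # hypothetical file

false_positives = df[(df["high_risk"] == 1) & (df["reoffended"] == 0)]
negatives = df[df["reoffended"] == 0]

fpr_by_group = (
    false_positives.groupby("race").size() / negatives.groupby("race").size()
)
print(fpr_by_group)  # large gaps between groups signal built-in bias
```

Nothing in this check requires access to the model's internals, which is exactly why auditing outcomes matters when the model itself is a black box.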
Because we're increasingly using machine learning techniques that obscure the internal pattern recognition and decision-making processes from us, we need to be more and more careful about how we are teaching these systems to make decisions. Most of us try to be better than our parents were in raising our own children. We need to be better than our past selves in training the first generation of intelligent AI, and the dawn of quantum computing only makes this more imperative.