Pal Hargitai has already explained it well.
It is a bit like raising children: you can give your children a certain set of ethical rules. An AI fed by a fascist will follow different ethical rules than an AI fed by a pacifist Christian.
This is an interesting topic, and one that has come under critical scrutiny recently. What follows assumes the current state of artificial intelligence: predictions based on existing datasets. At the moment, the ethics of such a system seems to be determined largely by its datasets and by its developers.
Research has already shown (see "AI programs exhibit racial and gender biases, research reveals") that artificial intelligence can be sexist and racist, for example when it is fed unfiltered, random data from the internet.
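To show the mechanism behind that research, here is a minimal sketch with an invented miniature corpus standing in for "random, unfiltered text from the internet". The real studies worked on word embeddings trained on billions of words; the toy version below only demonstrates the principle that statistical association mining inherits whatever biases its input contains.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed mini-corpus (all sentences invented).
sentences = [
    "the doctor said he was busy",
    "the doctor said he would operate",
    "the nurse said she was busy",
    "the nurse said she would help",
]

# Count which pronoun co-occurs with each profession in a sentence.
assoc = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for prof in ("doctor", "nurse"):
        if prof in words:
            for pron in ("he", "she"):
                assoc[prof][pron] += words.count(pron)

# The "model" now associates doctor -> he and nurse -> she purely
# because the input data did; nobody programmed that in.
print(dict(assoc["doctor"]), dict(assoc["nurse"]))
```

Nothing in the counting code mentions gender; the bias comes entirely from the data, which is exactly the point of the research cited above.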
Unfortunately I am not able to find the article at the moment. It was about a group at a large tech company where the software developers themselves determined what the ethics of the artificial intelligence would be.
The artificial intelligence was somewhat biased on a number of subjects and made choices on that basis, even in the justice system. For example:
In this talk, Hannah Fry gives an example in which a young man is sentenced to the maximum prison term for an offence with his underage girlfriend, even though it was consensual. Analysis of the algorithm made clear how unjust that outcome actually was.
What many people do not realise is that this is really about mathematics. Have you ever looked at the polls in the newspapers? That is mathematics too: statistics.
As a civil engineer (Master of Applied Sciences), I was bombarded with mathematics during my training. In the very first statistics lesson (!) you learn that a sample, a smaller amount than the whole population you are investigating, can only be representative if it actually reflects that population. To give an example: if a poll is done via telephone interviews, you have to take into account who is willing to answer questions. People with little time, little patience, or people who simply do not answer the phone determine who participates and who does not. For a newspaper everything must be cheap, so the probability that such a poll is carried out reliably is virtually nil.

There is a very well-known statistical study from World War II that looked at the bullet impacts on aircraft in order to determine where extra protection would best be placed so that more aircraft would return. The mistake initially made was that one only looked at the bullet impacts on returning planes, so the wrong conclusion was drawn. It turned out, if I am not mistaken, that a bullet strike near the engines was crucial: those planes almost never returned, and therefore scored low in the count of returning planes. I cannot find the story right now, but the point is that EVERYTHING stands or falls with the assumptions and the input data.
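That survivorship effect is easy to reproduce in a few lines. The numbers below (hit locations, return probabilities) are invented purely for illustration; the point is only that counting bullet holes on the planes that came back gives you the opposite of the right answer.

```python
import random

random.seed(42)

# Hypothetical toy model: planes hit near the engine rarely make it
# back, planes hit in the fuselage usually do.
N = 10_000
RETURN_PROB = {"engine": 0.2, "fuselage": 0.9}

# Count bullet holes ONLY on the planes that returned, exactly as
# the flawed original analysis did.
holes_on_returners = {"engine": 0, "fuselage": 0}
for _ in range(N):
    hit = random.choice(["engine", "fuselage"])
    if random.random() < RETURN_PROB[hit]:
        holes_on_returners[hit] += 1

# The returning sample is dominated by fuselage hits, which would
# (wrongly) suggest armouring the fuselage. The engine hits are
# "missing" from the data precisely because those planes went down.
print(holes_on_returners)
```

The input data here is not wrong, it is just not representative of all planes, and the conclusion flips accordingly.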
Let’s take a look at OpenAI, an outfit whose reasons for perfecting a model that can complete or write stories are not at all clear to me. A few months ago OpenAI sounded the alarm: they had a model that could write fake news so convincingly that they did not want to release it. Very scary, and everyone nods along, of course: AI is hugely dangerous. Nobody actually went to look at the paper referenced in the article; no, that was asking too much of the average reader. It boils down to this: the data they relied on was "scraped" from the internet. I dug a little deeper on the English-language Quora, but if you take stories about Miley Cyrus and LOTR as input (from Reddit), then it is "garbage in, garbage out". Everything written about Miley Cyrus is either marketing BS from her own desk or gossip from all and sundry. Or take Brexit, where both camps make arguments that do not always even contradict each other; here too there is no clear answer to the question of whether the pro-Brexit camp or the anti-Brexit camp is right. Ultimately the algorithm, the model they had so carefully trained on garbage from the internet, turned out to be very good at producing garbage for the internet. And I cannot stress enough HOW DANGEROUS this model is: all bloggers, journalists and opinion writers can now be replaced by this algorithm. Ai. Three times ai (or is it AI?).
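"Garbage in, garbage out" can be sketched with a toy bigram (Markov-chain) text generator. The training corpus below is invented; a real model like the one OpenAI described is vastly larger and more sophisticated, but the underlying limitation is the same: the model can only recombine the statistics of what it was fed.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical "scraped" corpus: gossip and slogans, no knowledge.
corpus = (
    "miley is amazing miley is a scandal "
    "brexit is great brexit is a disaster"
).split()

# Record which word follows which (a bigram model).
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, length=8):
    """Walk the chain, picking a random observed successor each step."""
    word, out = start, [start]
    for _ in range(length - 1):
        successors = transitions.get(word)
        if not successors:
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

# Fluent-looking remixes of the gossip it was trained on; it contains
# no knowledge, only the statistics of its input.
print(generate("miley"))
print(generate("brexit"))
```

Train on gossip, get gossip back out; that is the whole argument in miniature.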
AI (or KI, as we call it in Dutch) is an algorithm. It calculates probabilities, and it is very good at that. But it is the person who chooses the algorithm, applies it and determines the desired result who also selects the training data. Through widespread ignorance, and through our deep human attachment to Terminator stories (I am a fan too, mind you), the ethics of AI has become the subject of intense debate. That debate serves to distract and to frighten. Facebook, Amazon, Google, Apple and Microsoft all have hardware (chip) teams now, while some of them have little or nothing to do with hardware. Facebook is a software company, absolutely not a hardware company. So why go into AI hardware? Well, now they can create datasets and train models to serve users more and better advertising, without any prying eyes. Users still think that when they search for something, all the results are shown, while it is Facebook that chooses what you see, and above all what you never get to see. Google search results? How much information do they still yield? Every word is associated with something that can be sold; you used to be able to look up and find anything. LMGTFY, "let me google that for you", is a concept that is now fully exploited to deliver targeted information with the highest chance of leading to a purchase. Whether the use of mathematics or software (AI) is ethical or not remains the exclusive privilege of the human being who determines the input, the algorithm and the goal. So let us keep ethics where it belongs: with human beings.
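The claim that the person who selects the training data determines the outcome can be made concrete. Below is a minimal sketch (all example posts and labels are invented): the learning algorithm is identical in both runs, yet two different data curators produce two opposite "ethical" decisions.

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn, per word, the most common label attached to it."""
    votes = defaultdict(Counter)
    for text, label in examples:
        for word in text.split():
            votes[word][label] += 1
    return {w: c.most_common(1)[0][0] for w, c in votes.items()}

def classify(model, text):
    """Predict by majority vote of the per-word labels."""
    labels = Counter(model[w] for w in text.split() if w in model)
    return labels.most_common(1)[0][0]

# Same posts, two different curators with opposite labelling choices.
data_pacifist = [("protest march downtown", "allowed"),
                 ("peaceful protest planned", "allowed")]
data_censor   = [("protest march downtown", "blocked"),
                 ("peaceful protest planned", "blocked")]

post = "protest planned"
print(classify(train(data_pacifist), post))  # -> "allowed"
print(classify(train(data_censor), post))    # -> "blocked"
```

The mathematics is neutral; the ethics lives entirely in the choice of labels, which is a human decision made before the algorithm ever runs.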
Thanks, Quora user, for the question.