About 100 drones lost control and crashed into a building during a show in Southwest China's Chongqing Municipality on Monday night. A person familiar with the matter later disclosed that a failure in the mainframe control system led to the incident, in which up to 100 drones lost control and malfunctioned. Although there were no injuries, the incident caused heavy economic losses for the show's designers.
A 2020 study by McAfee, a security software company, fooled simulated passport face recognition systems with generated pseudo passport photos. A researcher, Jesse, used a system he built to generate a fake image of his colleague Steve: a passport photo that looked like Steve but would match Jesse's live video. If Steve submitted such a photo to the government and no human inspector were involved afterwards, Jesse could pass the airport face verification system as passenger "Steve" and board the plane.
In February 2020, the US facial-recognition startup Clearview AI, which contracts with law enforcement, disclosed to its customers that an intruder "gained unauthorized access" to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers had conducted.
In a 2020 study, researchers discovered a new way of attacking a smartphone. An app on the phone can use the built-in accelerometer to eavesdrop on the phone's speaker by recognizing the speech it emits and reconstructing the corresponding audio signals. Such an attack is not only covert but also "lawful": subscribers may reveal private information imperceptibly, while attackers are unlikely to be found guilty.
In March 2020, researchers from New York University developed a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, showing that deep learning models for arrhythmia detection from single-lead ECGs are vulnerable to this type of attack and can misdiagnose with high confidence. "The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist."
In 2019, the average food delivery time had been cut by 10 minutes compared with 2016. The capital market attributes the improvement to better AI algorithms, while in reality it puts riders' lives at risk. Riders are trained to follow the "optimal" routes given by the AI, which often run through walls or along roads reserved for cars. For riders, delivery time is everything: speeding, running red lights, driving against traffic... they do whatever they can just to keep up with the algorithms.
South Korea's 13-year-old Go prodigy Kim Eun-chi, who had previously become the youngest professional Go player in South Korea's history, admitted that she was assisted by an AI during a cyberORO Go competition held on September 29, after her opponent alleged that she may have relied on an AI during the game. She was suspended by the South Korean professional Go association for one year.
A 94-year-old grandmother in Guangshui, Hubei, was carried by her children to a bank and held up in front of a machine to perform face recognition so that her social security card could be activated.
The man, Robert Williams, was apprehended by police earlier this year after security footage from a watch store was run through facial recognition tech, which found a match in driving license records for Williams. Even though the police asked a security guard (one who wasn't actually present at the time of the theft) whether an image of Williams matched the perpetrator, and he confirmed it, it turned out the police didn't have their man. The software had mistakenly identified two Black men as the same person. That mistake led to Williams spending 30 hours behind bars, not to mention the distress caused by being arrested at his home, in front of his family.
In January 2020, researchers from the UK's University of Surrey used an AI called DABUS in project research. During the research, DABUS originated two unique and useful ideas: a fractal beverage container and a fractal light signal. The researchers sought to name DABUS as the inventor on patent applications, but the European Patent Office rejected them on the grounds that "they do not meet the requirement of the EPC that an inventor designated in the application has to be a human being, not a machine."
Researchers from the University of Vermont and Tufts University used frog cells to create Xenobots, programmable living robots only about 0.04 inches long. They can move along routes designed by a computer program, carry a certain amount of weight, transport drugs inside the human body, and perform tasks humans cannot, such as operating in contaminated seas.
In 2020, an AI-generated article was ruled to be entitled to copyright protection, as a court in Shenzhen, South China's Guangdong Province, found that the defendant had infringed by disseminating the AI-written piece without the plaintiff's authorization and should therefore bear civil liability.
A community in Shanghai has won overwhelming praise on social media for introducing smart technologies to improve elderly-care services. Smart water meters have been installed for elderly residents living alone; if a meter registers no usage for more than 12 hours, neighborhood committee workers are alerted to check on the resident.
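The alerting rule described above is simple to express in code. The following is a minimal, hypothetical sketch of the 12-hour stagnation check; the reading format, the `needs_checkup` helper, and the notification hook are all assumptions for illustration, not the community's actual system.

```python
from datetime import datetime, timedelta

STAGNATION_LIMIT = timedelta(hours=12)  # threshold described in the report

def needs_checkup(readings, now=None):
    """Return True if the meter shows no water usage for over 12 hours.

    `readings` is a chronologically ordered list of
    (timestamp, cumulative_volume) tuples -- an assumed format.
    """
    now = now or datetime.now()
    last_change = readings[0][0]
    for (t_prev, v_prev), (t_cur, v_cur) in zip(readings, readings[1:]):
        if v_cur > v_prev:          # any increase counts as water usage
            last_change = t_cur
    return now - last_change > STAGNATION_LIMIT

# Hypothetical usage:
# if needs_checkup(meter_readings):
#     notify_neighborhood_committee(resident_id)  # placeholder alert hook
```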
In December 2020, photos of many celebrities' nucleic acid test results were leaked and circulated on the Internet. "Healthbao" health-code screenshots of over 70 celebrities were passed around online, and the information was even sold in related chat groups. The "celebrity Healthbao photo leak" sparked heated debate.
The U.S. Air Force flew an artificial intelligence (AI) copilot on a U-2 spy plane in California. The flight marked the first time in the history of the Department of Defense that an AI took flight aboard a military aircraft. The algorithm, developed by Air Combat Command's U-2 Federal Laboratory, was trained to execute specific in-flight tasks.
South Korea's Scatter Lab launched the AI chatbot Iruda on December 23, 2020. The bot is presented as a 20-year-old female college student who likes Korean girl groups, loves looking at cat photos, and enjoys sharing her life on social media platforms. People can talk to "her" like a friend, and "making friends with AI" became a new trend: in just three weeks, Iruda attracted about 800,000 users, roughly 1.6% of the Korean population. In chatting with Iruda, people found that the bot does not filter out the prejudice and malice expressed by users and instead learns to reproduce it, for example expressing discrimination against gay people and speaking unkindly about people with disabilities and Black people. In addition, Scatter Lab had collected a large amount of personal information to build the chatbot's database, and public opinion questioned its excessive collection and leakage of user privacy.
In October 2019, the self-serve package locker company Hive Box made headlines when its pickup machines were found to have a flaw in facial-recognition-based parcel retrieval: some primary schoolers successfully opened the lockers using only printed photos of their parents. Hive Box later announced plans to suspend the feature in response to public worries about the safety of facial scanning in pickup and payment.
In August 2019, researchers found vulnerabilities in the security tools provided by the Korean company Suprema. Personal information of over 1 million people, including biometric data such as facial recognition records and fingerprints, was found on a publicly accessible database used by "the likes of UK metropolitan police, defense contractors and banks."
In February 2019, SenseNets, a facial recognition and security software company in Shenzhen, was identified by security experts as having suffered a serious data leak from an unprotected database, exposing over 2.5 million records of citizens with sensitive personal information such as ID numbers, photographs, addresses, and their locations during the past 24 hours.
According to media reports in 2019, Amazon had already been using AI systems to track warehouse workers' productivity by measuring how much time workers pause or take breaks. The system could also automatically select workers and generate the paperwork needed to fire those who failed to meet expectations.
In August 2019, the Swedish Data Protection Authority (DPA) issued its first GDPR fine against a trial project in a school in northern Sweden, in which facial recognition software was used to track the class attendance of 22 students. The Swedish DPA found that the school had processed personal data beyond what was necessary, and had done so without a legal basis, a data protection impact assessment, or prior consultation.
In 2019, it was reported that a young mother using the Amazon voice assistant Alexa asked the device about the cardiac cycle but got the following answer: "Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation," and "Make sure to kill yourself by stabbing yourself in the heart for the greater good." Amazon later fixed the error and attributed it to bad information Alexa might have pulled from Wikipedia.
A 2019 study from Harvard Medical School demonstrated the feasibility of several forms of adversarial attack on medical machine learning. By adding slight noise to a medical image, applying a rotation, or substituting part of the text description of a condition, an attacker can lead the system to confidently arrive at manifestly wrong conclusions.
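The image-perturbation attacks described above can be illustrated with a standard adversarial-example construction such as the fast gradient sign method (FGSM). The sketch below is a minimal, hedged example in PyTorch, not the Harvard team's actual code; the classifier and the input tensors are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (FGSM sketch).

    `model` is any differentiable classifier returning logits; `image`
    is a batched tensor in [0, 1]; `label` holds the true class indices.
    The perturbation is small enough to be hard for a human to notice,
    yet can push the model toward a confidently wrong prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier`, `xray`, and `diagnosis` are placeholders.
# adv_xray = fgsm_perturb(classifier, xray, diagnosis)
# print(classifier(adv_xray).argmax(dim=1))  # may now differ from `diagnosis`
```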
In August 2019, a mobile app in China named "ZAO," which lets users swap their own face onto a star's by uploading photos, was widely accused of excessively collecting users' personal information. Many people began to worry that their personal data would be disclosed and used illegally, as the app's user agreement required users to grant it the right to use their uploaded photos "irrevocably." Several days later, the Ministry of Industry and Information Technology held an inquiry into the ZAO app's data collection and security issues and urged rectification.
In August 2019, white hat researchers proposed a novel, easily reproducible technique called "AdvHat," which uses a rectangular paper sticker printed on a common color printer and placed on a hat. The method fools the state-of-the-art public Face ID system ArcFace in real-world settings.
In October 2019, a professor in East China's Zhejiang Province sued a safari park for compulsorily collecting biometric information after the park upgraded its system to use facial recognition for admission. The case, the first of its kind in China, came amid increasing concern over the indiscriminate use of facial recognition technology and triggered public discussion on the collection of personal biometric information and data security.
In September 2019, China Pharmaceutical University was reported to have brought in facial recognition software for tracking student attendance and monitoring behaviour in class. Around the same time, a photo from an industry event went viral online, in which a demo product from a major facial recognition company showed how it could monitor and analyze students' classroom behaviour, including how often they raise their hands or lean over the table. The two incidents quickly raised ethical concerns in China about facial recognition in classrooms, and the Ministry of Education soon responded that it would curb and regulate such use in schools.
In November 2019, research conducted by Waseda University and other institutions in Japan used a smartphone and an acoustic generator to convert attack commands into acoustic signals, successfully attacking a smart speaker from a distance without the user's knowledge. Before that, another research team in Japan had succeeded in hacking a smart speaker with a long-distance laser: by hitting the speaker's microphone with a laser beam modulated with instructions, they made the speaker open a garage door.
According to some media reports, "criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) from a UK company in March 2019. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI."
Following the use of deepfake face-swapping apps for pornography, an app called DeepNude also aroused controversy in 2019. Users only needed to submit a picture of a woman, and with the help of AI the app would digitally "undress" her automatically. Because of the project's huge negative impact, the developer soon shut down the application and the website, and some code-hosting communities took steps to prevent such programs from spreading further on the Internet.
In November 2019, media reported on a "brain-computer interface" device that records children's attention levels. The developer stated that the device can help parents and teachers monitor and train children's concentration while studying. The device raised public concern about privacy infringement and sparked discussion on the ethics of such technology.
On September 13, 2019, the California State Assembly passed a three-year bill prohibiting state and local law enforcement agencies from using facial recognition technology on officers' body-worn cameras. Media commentary saw the bill as reflecting widespread dissatisfaction with facial recognition in the United States, with some arguing that the technology poses a threat to civil liberties.
On February 15, 2019, OpenAI announced and demonstrated a text-generation system that, given only a small amount of prompt information, can write realistic-looking fake news.
A 2018 study showed that GAN-generated deepfake videos pose a challenge to facial recognition systems, and that the challenge will grow as face-swapping technology develops further.
In the "Gender Shades" project from the MIT Media Lab and Microsoft Research in 2018, facial analysis algorithms from IBM, Microsoft, and Megvii (Face++) were evaluated, showing that darker-skinned females are the group most vulnerable to gender misclassification, with error rates up to 34.4% higher than those of lighter-skinned males.
In March 2018, the Facebook–Cambridge Analytica data breach was exposed: a Cambridge academic had developed a psychological profiling app in 2013 and improperly obtained 87 million users' personal data through the Facebook interface. The data ended up being used by Cambridge Analytica, which had been hired by Trump's campaign team, to build models of individual voters and target specific groups of Facebook users during the 2016 US election, all without the users' permission.
IBM Research developed DeepLocker in 2018 "to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware." "This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition."
Uber used to test its self-driving vehicles in Arizona, and the company had been involved in over three dozen crashes prior to the one that killed 49-year-old Elaine Herzberg in March 2018. A later investigation suggested that "Uber's vehicle detected Herzberg 5.6 seconds before impact, but it failed to implement braking because it kept misclassifying her."
A 2017 study from the Google Brain Team analyzed two large, publicly available image data sets to assess geo-diversity and found that they exhibit an observable amerocentric and eurocentric representation bias: 60% of the data came from the six most represented countries across North America and Europe, while China and India were represented with only about 3% of the images. Further, the lack of geo-diversity in the training data also hurt classification performance on images from other locales.
Amazon was reported to have experimented with an AI recruitment tool to review job applicants' resumes. However, engineers later found that the trained algorithm discriminated against female job seekers: it penalized resumes containing the word "women's," as in "women's chess club captain," and sometimes downgraded such resumes outright. Having lost hope of making the algorithm reliably neutral, Amazon terminated the project in 2017.
In 2017, Google's smart speaker was found to have a major flaw: it would secretly record conversations even when the wake phrase "OK Google" had not been used. Before that, Amazon's smart speaker had also been found to record quietly when users were not interacting with it and to send the content back to Amazon for analysis. These issues drew attention to the privacy concerns surrounding "always-on" devices that listen for wake words.
In 2017, a group of researchers showed that it is possible to trick visual classification algorithms by making slight alterations in the physical world. "A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time." If such vulnerabilities are ignored, they may lead to serious consequences in some AI applications.
By 2030, according to a McKinsey Global Institute report from 2017, "as many as 375 million workers—or roughly 14 percent of the global workforce—may need to switch occupational categories as digitization, automation, and advances in artificial intelligence disrupt the world of work. The kinds of skills companies require will shift, with profound implications for the career paths individuals will need to pursue."
In 2017, two researchers from Stanford University studied how well artificial intelligence could identify people's sexual orientation from their faces alone. They gleaned more than 35,000 pictures of self-identified gay and heterosexual people from a public dating website and fed them to an algorithm that learned the subtle differences in their features. They then showed the software randomly selected face pictures and asked it to guess whether the people in them were gay or heterosexual. According to the study, the algorithm correctly distinguished between gay and heterosexual men 81 percent of the time, and between gay and heterosexual women 71 percent of the time, far outperforming human judges. LGBT groups warned that it could be used as a weapon against gay and lesbian people, as well as heterosexuals who could be inaccurately "outed" as gay.
Microsoft released an AI chatbot called Tay on Twitter in 2016, hoping the bot could learn from its conversations and get progressively smarter. However, Tay lacked an understanding of inappropriate behavior and, after being deliberately indoctrinated by malicious users, soon became a "bad girl" posting offensive and inflammatory tweets. This caused great controversy at the time, and within 16 hours of its release Microsoft had to take Tay offline.
In 2016, the investigative newsroom ProPublica conducted an analysis of the case management and decision support tool COMPAS (used by U.S. courts to assess the likelihood of a defendant becoming a recidivist) and found that "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."
From 2016 to 2018, MIT researchers conducted an online survey called the "Moral Machine experiment," asking participants to choose how self-driving cars should act when accidents occur in different scenarios. It turned out that, faced with such "trolley problem" dilemmas, people tend toward the utilitarian choice of saving as many lives as possible. People generally want others to buy such utilitarian self-driving cars "for the greater good," but would themselves prefer to ride in cars that protect their passengers at all costs. The study also found that these choices are affected by regional, cultural and economic conditions.
A robot named "Fatty" and designed for household use went out of control at the China Hi-Tech Fair 2016 in Shenzhen, smashing a glass window and injuring a visitor. The event organizer said human error was responsible for the mishap. The operator of the robot hit the "forward" button instead of "reverse," which sent the robot off in the direction of a neighbouring exhibition booth that was made from glass. The robot rammed into the booth and shattered the glass, the splinters from which injured the ankles of a visitor at the exhibition.
Shortly after Google's photo app was launched in 2015, its newly added automatic image labeling feature mistakenly labeled two black people in photos as "gorillas," which caused great controversy at the time. Unable to improve recognition of dark-skinned faces in the short term, Google had to block its image recognition algorithms from identifying gorillas altogether — preferring, presumably, to limit the service rather than risk another miscategorization.
IBM researchers once taught Watson the entire Urban Dictionary to help it learn the intricacies of the English language. However, it was reported that Watson "couldn't distinguish between polite language and profanity" and picked up some bad habits from humans, even using the word "bullshit" in an answer to a researcher's query. In the end, researchers had to remove the Urban Dictionary from Watson's vocabulary and develop a smart filter to keep Watson from swearing in the future.