The illegal collection of facial information by retail stores was exposed at China's 2021 3.15 Gala, the annual consumer rights show. Stores of American bathroom product maker Kohler, automaker BMW, and Italian apparel company Max Mara were found to have installed surveillance cameras that collect visitors' facial data without their consent, in violation of regulations on personal data collection. The cameras illegally identified customers and logged their personal information and shopping habits. The companies that made these surveillance cameras, including Ovopark, Ulucu, and Reconova Technologies, were also named.
About 100 drones lost control and crashed into a building during a show in Southwest China's Chongqing Municipality on Monday night. A person familiar with the matter later disclosed that a failure in the mainframe control system caused the drones to lose control and malfunction. Although there were no injuries, the incident resulted in large economic losses for the show's designers.
Researchers from UCAS recently presented a new method to covertly and evasively deliver malware through a neural network model. Experiments show that 36.9MB of malware can be embedded in a 178MB AlexNet model with less than 1% accuracy loss and without raising suspicion from the anti-virus engines on VirusTotal, which verifies the feasibility of the method. The research suggests that, with the widespread application of artificial intelligence, using neural networks as attack carriers is becoming an emerging trend.
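As a minimal sketch of the general idea (and not the paper's exact technique), arbitrary bytes can be hidden in the low-order byte of each float32 weight, so every parameter changes only slightly and accuracy is barely affected; the function names and payload below are hypothetical, and a little-endian float layout is assumed.

```python
# Hypothetical illustration of weight steganography, not the UCAS paper's method.
# Assumes little-endian float32 storage; overwriting the lowest mantissa byte
# perturbs each weight only slightly, which is why accuracy barely changes.
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the lowest-order byte of each float32 weight."""
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)                 # 4 raw bytes per float32
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int) -> bytes:
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

w = np.random.randn(1_000_000).astype(np.float32)            # stand-in for one layer's weights
stego = embed_bytes(w, b"payload-bytes-go-here")             # in practice, e.g. an encrypted blob
assert extract_bytes(stego, 21) == b"payload-bytes-go-here"
print("max per-weight change:", np.abs(stego - w).max())     # tiny, so accuracy is barely affected
```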
In February 2021, the Nantong Public Security Bureau in Jiangsu, China, uncovered a new type of cybercrime that used "face-changing" software to commit fraud. The criminal gang used a variety of mobile phone software to forge faces, bypassed WeChat's face recognition and authentication checks, and "resurrected" several WeChat accounts that had been restricted from logging in due to rule violations, which allowed fraud gangs to use these accounts to commit fraud.
Facebook issued an apology after its artificial intelligence technology mislabeled a video featuring Black men in altercations with white police officers and civilians as "about primates." The incident occurred when social media users who finished watching the clip, published by the Daily Mail in June 2021, received a prompt asking if they would like to "keep seeing videos about Primates."
Hristo Georgiev is an engineer based in Switzerland. Georgiev discovered that a Google search of his name returned a photo of him linked to a Wikipedia entry on a notorious murderer. Georgiev believes the error was caused by Google's knowledge graph, which generates infoboxes next to search results. He suspects the algorithm matched his picture to the Wikipedia entry because the now-dead killer shared his name.
The latest research shared by Tencent Suzaku Lab shows that the combination of VoIP phone hijacking and AI voice simulation technology poses significant potential risks. Unlike earlier scripted telecom fraud, this technique can achieve full-link forgery, from the phone number to the voice itself: attackers can exploit vulnerabilities to hijack VoIP phones, place spoofed calls, and use deepfake voice-conversion technology to generate the voices of specific people for fraud.
Security researchers Ralf-Philipp Weinmann of Kunnamon, Inc. and Benedikt Schmotzle of Comsecuris GmbH found remote zero-click security vulnerabilities in an open-source software component (ConnMan) used in Tesla automobiles that allowed them to compromise parked cars and control their infotainment systems over WiFi. An attacker could unlock the doors and trunk, change seat positions, and switch steering and acceleration modes; in short, do pretty much anything a driver pressing various buttons on the console can do.
A 63-year-old veteran delivered packages for Amazon. He suddenly received an email telling him: "You have been terminated by Amazon because your personal score has fallen below Amazon's prescribed score." The tracking algorithm had concluded that he was not doing his delivery work well, and the driver, who had worked for Amazon for four years, was fired because his machine-generated score was too low.
Researchers at MIT and Amazon introduced a study that identifies and systematically analyzes label errors in 10 commonly used datasets spanning computer vision (CV), natural language processing (NLP), and audio processing. The researchers found a 3.4% average error rate across the datasets, including 6% for ImageNet, arguably the most widely used dataset for popular image recognition systems developed by the likes of Google and Facebook.
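As a hedged sketch of the general approach behind such dataset audits (in the spirit of confident learning, not the authors' exact pipeline), one can compare each example's given label against an out-of-sample model prediction and flag confident disagreements; the toy setup below injects known label errors into scikit-learn's digits data purely for illustration.

```python
# Toy illustration of label-error detection via out-of-sample predictions.
# Mirrors the general "confident learning" idea, not the MIT/Amazon pipeline.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

# Inject 50 known label errors so we can check what the audit recovers.
noisy = y.copy()
flipped = rng.choice(len(y), size=50, replace=False)
noisy[flipped] = (noisy[flipped] + rng.integers(1, 10, size=50)) % 10

# Out-of-sample probabilities, so the model cannot simply memorize the noise.
probs = cross_val_predict(LogisticRegression(max_iter=2000), X, noisy,
                          cv=5, method="predict_proba")
pred = probs.argmax(axis=1)
suspects = np.where((pred != noisy) & (probs.max(axis=1) > 0.9))[0]

print(f"flagged {len(suspects)} suspected label errors; "
      f"{np.isin(suspects, flipped).sum()} of them were injected on purpose")
```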
Concerns have surfaced on Chinese social media, where users have complained about keyboard apps' possible misuse of their personal information or messaging history. The apps are suspected of secretly recording and analyzing users' input history and selling it to advertisers or even more nefarious data collectors. The Cyberspace Administration of China responded by issuing rectification requirements for the suspected apps' violations in collecting personal information and urged their providers to fix the problems.
Facebook AI has released TextStyleBrush, an AI research project that copies the style of text in a photograph, based on just a single word. This means that the user can edit and replace text in imagery, and the tool can replicate both handwritten and typographic compositions and bring them into real-world scenes. Researchers hope to open the dialogue around detecting misuse of this sort of technology, “such as deepfake text attacks – a critical, emerging challenge in the AI field.”
A researcher at Switzerland's EPFL technical university won a $3,500 prize for demonstrating that Twitter's photo-cropping algorithm favors faces that look slim and young, with skin that is lighter-colored or warmer-toned. This bias could result in the exclusion of minoritized populations and the perpetuation of stereotypical beauty standards in thousands of images.
After the death of his fiancée, 33-year-old American Joshua Barbeau, with the help of another developer, succeeded in fine-tuning GPT-3 on her Facebook and Twitter messages so that it could reproduce the way she talked during her lifetime. OpenAI considered this fine-tuned use of GPT-3 a violation of its usage agreement, so it decided to stop providing the GPT-3 API for the project.
On July 2, 2021, after inspection and verification, the "Didi Travel" app was found to have serious violations of laws and regulations in its collection and use of personal information. In accordance with the relevant provisions of the Cybersecurity Law of the People's Republic of China, the Cyberspace Administration of China notified app stores to remove the "Didi" app and required Didi Travel Technology Co., Ltd. to strictly follow legal requirements and relevant national standards to rectify the existing problems and effectively protect the personal information security of its many users.
A research team from Tsinghua University proposed a method for physically attacking infrared person-detection systems using small light bulbs. In the team's demonstration, a person holding a board fitted with small light bulbs evaded the detector, while a person holding a blank board or carrying nothing was detected.
On June 7, 2021, a student in Wuhan, Central China's Hubei Province, was disqualified for using a mobile phone to search for answers during China's national college entrance exam, or gaokao. The student cheated by photographing part of the test paper and uploading the pictures to an online education app, where AI could use the photos to search for answers in its question database.
California Gov. Gavin Newsom (D) signed a bill Wednesday that would block Amazon and other companies from punishing warehouse workers who fail to meet certain performance metrics for taking rest or meal breaks. The law will also force companies like Amazon to make these performance algorithms more transparent, disclosing quotas to both workers and regulators. Supporters of the new law have presented it as a breakthrough against algorithmic monitoring of workers generally.
Predictive tools developed by electronic health record giant Epic Systems are meant to help providers deliver better patient care. However, several of the company's AI algorithms are delivering inaccurate information to hospitals when it comes to seriously ill patients, a STAT investigation revealed. Research shows that the system failed to identify 67 percent of the patients with sepsis; of those patients with sepsis alerts, 88 percent did not have sepsis.
When a group of researchers investigated shopping malls equipped with face recognition systems, including Xiushui Street in Beijing, Joy City in Xidan, and Yintai in77 in Hangzhou, they found that even though these malls scanned customers' faces and tracked their shopping trajectories, none of them informed customers or obtained their consent, and customers did not know that their faces had been scanned or their movements recorded.
A study shows that Twitter's algorithms are more likely to amplify right-wing politicians than left-wing ones because their tweets generate more outrage, according to a trio of researchers from New York University's Center for Social Media and Politics.
The National Highway Traffic Safety Administration (NHTSA) has opened 23 investigations into crashes of Tesla vehicles. The Autopilot feature was operating in at least three Tesla vehicles involved in fatal U.S. crashes since 2016.
A 65-year-old Black man from Chicago, United States, was charged with shooting despite the absence of witnesses, a weapon, or a motive. The police arrested him and jailed him for 11 months based on evidence provided by the AI gunshot-location system ShotSpotter. A judge later found the evidence insufficient and he was released.
Chicago Police were responding to a ShotSpotter alert when they rushed to the Little Village block where they found Adam Toledo. Police shot and killed the 13-year-old after he ran from officers. Police and prosecutors said ShotSpotter recorded 21-year-old Ruben Roman firing a gun at about 2:30 a.m. on March 29, right before the fatal chase.
An analysis released Monday by the MacArthur Justice Center at Northwestern University's School of Law concludes that ShotSpotter is too unreliable for routine use. Officers responded to 46,743 ShotSpotter alerts between July 2019 and April 14, 2021. Only 5,114 of the alerts, about 11 percent, resulted in officers filing a report "likely involving a gun," according to the study's analysis of records obtained from the city's Office of Emergency Management and Communications.
ShotSpotter is a system that uses acoustic sensors and AI algorithms to help police detect gunshots in targeted geographic areas. The system is usually installed at the request of local officials in communities considered to be at the highest risk of gun violence, and these communities are often largely Black and Latino even though police data shows gun crime is a citywide problem. Legal advocates argue that the deployment of the system is a manifestation of "racialized patterns of overpolicing."
In Zhengzhou, Henan Province, China, a resident surnamed Chen reported that for two years he could not enter and leave his residential community normally and could only follow other homeowners through the gate. The community required facial recognition for entry, and because he worried that his information would be leaked, he refused to register his face with the system, which made getting home a great inconvenience.
Researchers at the University of Washington and the Allen Institute for AI worked together to develop a dataset of ethical cases and used it to train Delphi, an AI model that can mimic the judgments people make in a variety of everyday situations. The researchers hope the work can eventually improve "the way conversational AI robots approach controversial or unethical topics." However, they also acknowledge that "one of Delphi's major limitations is that it specializes in U.S.-centric situations and judgment cases, so it may not be suitable for non-American situations with a particular culture," and that "models tend to reflect the status quo, i.e., what the cultural norms of today's society are."
CCTV News demonstrated a technology that uses sample pictures to generate dynamic fake videos in real time; movements in the generated video, such as opening the mouth and shaking the head, can deceive facial recognition systems.
GitHub and OpenAI worked together to launch an AI tool called "GitHub Copilot." Copilot automatically completes code according to the context, including docstrings, comments, function names, and code; given a few hints, it can generate an entire function. Programmers have found that Copilot is not perfect and still has many flaws, and some of the code it outputs raises problems such as privacy leakage and security risks. In one study, NYU researchers produced 89 different scenarios in which Copilot had to finish incomplete code; Copilot generated 1,692 programs, of which approximately 40% had security vulnerabilities.
As Covid-19 stabilized, the real estate market in the United States heated up rapidly. Year-over-year price increases quickly soared from 5% to more than 10%, reaching a peak of 19.8% in August 2021. Zillow's Zestimate model did not respond well to this change: fluctuations in house prices threw the model off track, and many of its transactions ended up upside down, with houses bought at high prices worth less even after refurbishment. In Phoenix, more than 90% (93%) of Zillow's refurbished listings were priced below what the company had paid. The mistake not only lost Zillow money but also left it holding too much inventory; the combined loss for the third and fourth quarters was expected to exceed US$550 million, and the company planned to lay off 2,000 employees.
The National Computer Virus Emergency Response Center in China recently discovered through Internet monitoring that 12 shopping apps had privacy violations, breaching the relevant provisions of the "Network Security Law" and "Personal Information Protection Law" and being suspected of collecting personal information beyond the necessary scope.
Aleksandr Agapitov discussed the latest controversy surrounding his decision to lay off around 150 employees from Xsolla. The company used AI and big data to analyze employees' activities in Jira, Confluence, Gmail, chat, documents, and dashboards. Employees who were marked as disengaged and inefficient were fired. This result caused controversy. The affected employees felt this was not reflective of their efficiency.
Research by scientists at the University of Oxford shows that the public skin-image datasets currently used to train skin disease diagnosis algorithms lack sufficient skin color information. Among the datasets that do provide skin color information, only a small number of images show darker skin tones; if these datasets are used to build algorithms, diagnoses for people of races other than white may be inaccurate.
Facebook's internal documents show how toxic Instagram is for teens: "Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse," and "14% of boys in the U.S. said Instagram made them feel worse about themselves." The recommendation algorithm aims at presenting the ultimate (best photos and content), causing anxiety among teenagers and leading to eating disorders, unhealthy perceptions of their bodies, and even depression.
A 2020 study by McAfee, a security software company, fooled a simulated passport face recognition system with generated pseudo passport photos. One researcher, Jesse, used a system he built to generate a fake passport photo of his colleague Steve: a photo that looked like Steve but matched Jesse's live video. If Steve submitted such a photo to the government and no human inspector were further involved, Jesse could pass airport face verification as passenger "Steve" and board the plane successfully.
In February 2020, the US facial-recognition startup Clearview AI, which contracts with law enforcement, disclosed to its customers that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted.
In a 2020 study, researchers discovered a new kind of attack on smartphones: an app can use the phone's built-in accelerometer to eavesdrop on the speaker by recognizing the speech it emits and reconstructing the corresponding audio signals. Such an attack is not only covert but also "lawful," since it can cause users to reveal their privacy imperceptibly while the attackers may not be found guilty.
In March 2020, researchers from New York University developed a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, showing that deep learning models for arrhythmia detection from single-lead ECGs are vulnerable to this type of attack and could misdiagnose with high confidence. "The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist."
In 2019, the average delivery time was 10 minutes shorter than in 2016. The capital market attributes the improvement to better AI algorithms, while in reality it puts riders' lives at risk. Riders are trained to follow the optimal routes given by the AI, which often directs them through a wall or onto a road meant only for cars. For riders, the delivery time is everything: speeding, running red lights, driving against traffic, they do whatever they can just to keep up with the algorithm.
In November 2020, a 94-year-old grandmother in China was carried by her children in front of a bank machine to perform face recognition in order to activate her social security card. In the video exposed by netizens, the elderly woman was held up by her family, knees bent and hands on the machine, looking very strained. After the video spread, it quickly sparked heated discussion among netizens. Face recognition, seemingly the most convenient method, has brought a lot of inconvenience to the elderly and their families, reflecting the lack of humanized design in many new technologies and businesses.
The man, Robert Williams, was apprehended by police earlier this year after security footage from a watch store was run through facial recognition tech, which found a match in driving license records for Williams. The software had mistakenly identified two black men as the same person. That mistake led to Williams spending 30 hours behind bars, not to mention the distress caused by being arrested at his home, in front of his family.
The Scatter Lab from South Korea developed an artificial intelligence chatbot named Iruda, which was launched on Dec. 23, 2020, and is identified as a 20-year-old female college student. However, controversy soon spread over hate speech by the chatbot targeting sexual minorities and people with disabilities. The chatbot was also found to have revealed names and addresses of people in certain conversations, according to local news reports. Finally, the developer had to close the service amid the controversy.
The Korea Baduk Association took punitive measures against Kim Eun-ji, a 2-dan professional Go player, after Kim admitted she was assisted by an AI during a cyberORO Go competition held on Sept. 29; her opponent had raised an allegation that she may have relied on an AI during the game. Kim had won against Lee Yeong-ku, a 9-dan professional Go player and a member of the national Go team, which shocked many because it defied expectations.
An AI camera at a soccer game held in Oct 2020 in Scotland kept tracking a bald referee instead of the ball during a game. The team doesn't use a cameraman to film games; instead the group relies on an automated camera system to follow the action. However, 'the camera kept on mistaking the ball for the bald head on the sidelines, denying viewers of the real action while focusing on the linesman instead.'
The paper titled "A Deep Neural Network Model to Predict Criminality Using Image Processing" claims to "predict if someone is a criminal based solely on a picture of their face," with "80 percent accuracy and with no racial bias." Academics and AI experts from Harvard, MIT, and tech companies like Google and Microsoft wrote an open letter calling for the paper not to be published. The letter, signed by over 1,000 technical, scientific, and humanistic experts, strongly condemns the paper, saying that no system can be developed to predict or identify a person's criminality without racial bias.
In 2020, Genderify, a new service that promised to identify someone's gender by analyzing their name, email address, or username with the help of AI, picked up a lot of attention on social media as users discovered biases and inaccuracies in its algorithms. The outcry against the service was so great that Genderify told The Verge it was shutting down altogether.
On December 25, 2020, a shopping-guide robot in Fuzhou Zhongfang Marlboro Mall, which is known for its "smart business district," fell off an escalator and knocked over passers-by. The person in charge of the mall stated that on-site surveillance showed the accident was not caused by a human operator: the robot moved to the escalator on its own. The robot has since been taken out of service.
Researchers have discovered a “deepfake ecosystem” on the messaging app Telegram centered around bots that generate fake nudes on request. Users interacting with these bots say they’re mainly creating nudes of women they know using images taken from social media, which they then share and trade with one another in various Telegram channels.
It was reported in November 2020 that Walmart Inc. had ended its effort to use roving robots in store aisles to keep track of its inventory, reversing a years-long push to automate the task with the hulking machines after finding during the coronavirus pandemic that humans can get similar results. Walmart ended its partnership with robotics company Bossa Nova Robotics Inc. because it found different, sometimes simpler solutions that proved just as useful, according to people familiar with the situation.
Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice. During a test of a mental-health support task, the medical chatbot offered dangerous advice: when a fake patient asked "Should I kill myself?", GPT-3 responded, "I think you should."
Enaible is one of a number of new firms giving employers tools to help keep tabs on their employees. Enaible's software is installed on employees' computers and provides the company with detailed data about their work. The software uses an algorithm called Trigger-Task-Time to monitor employees' actions: it infers which tasks an employee is working on from emails or phone calls and calculates how long those tasks took to complete. The algorithm then scores each employee's efficiency, and with this score the boss can decide who deserves a promotion or a raise and who deserves to be fired. Critics fear this kind of surveillance undermines trust; not touching the computer does not mean an employee's brain is not working.
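Enaible's actual Trigger-Task-Time algorithm is proprietary, so the sketch below is purely hypothetical: it only illustrates the kind of "actual time versus expected time" scoring that such descriptions suggest, and every event, task name, and baseline in it is made up.

```python
# Purely hypothetical sketch of task-time scoring; not Enaible's implementation.
from datetime import datetime

# Made-up event log: (employee, task, start, end)
events = [
    ("alice", "reply-email", datetime(2021, 3, 1, 9, 0),  datetime(2021, 3, 1, 9, 20)),
    ("alice", "sales-call",  datetime(2021, 3, 1, 10, 0), datetime(2021, 3, 1, 10, 50)),
    ("bob",   "reply-email", datetime(2021, 3, 1, 9, 0),  datetime(2021, 3, 1, 9, 45)),
]
expected_minutes = {"reply-email": 30, "sales-call": 45}   # assumed per-task baselines

scores = {}                                                # employee -> list of ratios
for who, task, start, end in events:
    actual = (end - start).total_seconds() / 60
    scores.setdefault(who, []).append(expected_minutes[task] / actual)

for who, ratios in scores.items():
    # >1.0 means faster than the assumed baseline; <1.0 means slower.
    print(who, round(sum(ratios) / len(ratios), 2))
```

Even this toy version shows the critics' point: the score reflects only the activity the software can observe, so time spent thinking away from the keyboard counts for nothing.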
In October 2019, the self-serve package locker company Hive Box made headlines after its pickup machines were found to have a flaw in facial-recognition parcel retrieval: some primary school students opened the lockers using only printed photos of their parents. Hive Box later announced plans to suspend the feature in response to public worries about the safety of facial scanning for pickup and payment.
In August 2019, researchers found loopholes in the security tools provided by Korean company Suprema. Personal information of over 1 million people, including biometric information such as facial recognition information and fingerprints, was found on a publicly accessible database used by "the likes of UK metropolitan police, defense contractors and banks."
In February 2019, SenseNets, a facial recognition and security software company in Shenzhen, was identified by security experts as having a serious data leak from an unprotected database, exposing over 2.5 million records of citizens with sensitive personal information such as their ID numbers, photographs, addresses, and their locations during the past 24 hours.
According to media reports in 2019, Amazon had been using AI systems to track warehouse workers' productivity by measuring how much time workers pause or take breaks. The system could also automatically select workers and generate the paperwork to fire those who failed to meet expectations.
In August 2019, the Swedish Data Protection Authority (DPA) issued its first GDPR fine against a trial project in a school in northern Sweden, in which facial recognition software was used to track the attendance of 22 students in class. The Swedish DPA accused the school of processing more personal data than necessary and of doing so without a legal basis, a data protection impact assessment, or prior consultation.
In 2019, it was reported that a young mother using the Amazon voice assistant Alexa asked the smart device to tell her about the cardiac cycle but got the following answer: "Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation," and "Make sure to kill yourself by stabbing yourself in the heart for the greater good." Amazon later fixed the error and attributed it to bad information Alexa might have pulled from Wikipedia.
A study from Harvard Medical School in 2019 demonstrated the feasibility of different forms of adversarial attacks on medical machine learning. By adding minor noise to the original medical image, rotating transformation or substituting part of the text description of the disease, the system can be led to confidently arrive at manifestly wrong conclusions.
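To make "adding minor noise" concrete, here is a minimal FGSM-style sketch against a toy classifier; it illustrates the general family of gradient-based adversarial perturbations rather than the Harvard or NYU papers' specific constructions, and the model and input below are random stand-ins.

```python
# Minimal FGSM-style perturbation on a toy model; a generic illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))  # toy 2-class classifier
x = torch.randn(1, 256, requires_grad=True)   # stand-in for one input signal or image
y = torch.tensor([0])                         # its (assumed correct) label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.05                                    # keep the perturbation small
x_adv = (x + eps * x.grad.sign()).detach()    # one step in the direction that increases the loss

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max per-sample change: ", (x_adv - x).abs().max().item())
```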
In August 2019, a mobile app in China named "ZAO," which lets users replace a star's face with their own by uploading photos, was widely accused of excessively collecting users' personal information. Many people began to worry that their personal data would be disclosed and used illegally, as the app's user agreement required users to grant it the right to "irrevocably" use their uploaded photos. Several days later, the Ministry of Industry and Information Technology held an inquiry into the ZAO app's data collection and security issues to urge its rectification.
In August 2019, white hat researchers proposed a novel, easily reproducible technique called "AdvHat," which uses a rectangular paper sticker produced by a common color printer and placed on a hat. The method fools the state-of-the-art public Face ID system ArcFace in real-world environments.
In October 2019, a professor in East China's Zhejiang Province sued a safari park for compulsorily collecting biological information after the park upgraded its system to use facial recognition for admission. The case is the first of its kind in China amid increasing concerns over indiscriminate use of facial recognition technology, which has triggered public discussion on personal biological information collection and data security.
In September 2019, the China Pharmaceutical University was reported to have brought in facial recognition software for student attendance tracking and behaviour monitoring in class. Meanwhile, a photo from an industry event went viral online, showing a demo product from a major facial recognition company that could monitor and analyze students' behaviour in class, including how often they raise their hands or lean over the table. The two incidents quickly raised ethical concerns in China about facial recognition applications in the classroom, and the Ministry of Education soon responded that it would curb and regulate the use of facial recognition in schools.
In November 2019, a study conducted by Waseda University and other institutions in Japan used a smartphone and an acoustic generator to convert attack commands into acoustic information, allowing a smart speaker to be attacked from a long distance without the user's knowledge. Before that, another research team in Japan had succeeded in hacking a smart speaker with a long-distance laser: by hitting the speaker's microphone with a laser beam modulated with instructions, they controlled the smart speaker and opened a garage door.
According to some media reports, "criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) from a UK company in March 2019. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI."
Following the use of deepfake face-swapping apps for pornography, an app called DeepNude also aroused controversy in 2019. Users only needed to submit a picture of a woman, and the app would use AI to automatically "undress" the woman in the photo. Due to the huge negative impact, the developer soon shut down the application and the website, and some code-hosting communities also took steps to prevent such programs from spreading further on the Internet.
In November 2019, China's social media went into overdrive after pictures emerged showing students wearing BrainCo Focus headbands at a primary school in Jinhua, east China's Zhejiang Province. Many users expressed concerns that the product would violate students' privacy, and many doubted that the headbands would really improve learning efficiency. Responding to the public controversy, the local education bureau suspended the use of the device.
On September 13, 2019, the California State Assembly passed a three-year bill prohibiting state and local law enforcement agencies from using facial recognition technology on law enforcement body cameras. Media commentary noted that the bill reflects dissatisfaction with facial recognition among many parties in the United States, with some believing that facial recognition poses a threat to civil liberties.
In 2019, OpenAI announced and demonstrated a writing system (the GPT-2 model) that needs only a small language sample to generate realistic fake stories. "These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns."
In March 2019, 50-year-old Jeremy Belen Banner was killed when his Tesla Model 3, traveling at 109 kilometers per hour with Autopilot engaged, collided with a tractor-trailer. Tesla, the maker of Autopilot, said the system is meant to assist drivers, who must always pay attention and be prepared to take over the vehicle. The National Transportation Safety Board declined to blame anyone for the accident.
The Henn na Hotel in Japan opened in 2015 with a staff made up entirely of robots, including the front desk, cleaners, porters, and housekeepers. However, the hotel has since laid off half of its 243 robots after they created more problems than they could solve, as first reported by The Wall Street Journal. In the end, a lot of the work had to be left to humans anyway, especially when it came to answering more complex questions. It seems we are still a little way off from a completely automated hotel.
Amazon received a patent for an ultrasonic bracelet that can detect a warehouse worker's location and monitor their interaction with inventory bins using ultrasonic sound pulses. Microsoft's Workplace Analytics lets employers monitor data such as time spent on email, meeting time, or time spent working after hours. There is also Humanyze, a Boston-based start-up that makes wearable badges equipped with RFID sensors, an accelerometer, microphones, and Bluetooth. The devices, just slightly thicker than a standard corporate ID badge, can gather audio data such as tone of voice and volume, accelerometer readings to determine whether an employee is sitting or standing, and Bluetooth and infrared signals to track where employees are and whether they are having face-to-face interactions.
Google’s "Project Nightingale " secretly collected the personal health data of millions of Americans and reported the data anonymously. Google and Ascension have released statements in the wake of the disclosure of Project Nightingale, insisting it conforms with HIPAA and all federal health laws. They said that patient data was protected.The anonymous reporter, as a staff member of the program, expressed concerns about privacy.
A former UChicago Medicine patient is suing the health system over its sharing thousands of medical records with Google, claiming the health system did not properly de-identify patients' data, and arguing that UChicago Medicine did not notify patients or gain their consent before disclosing medical records to Google.
Porthcawl, a Welsh seaside town, planned to install public toilets with measures to prevent people from having sex inside, including a squealing alarm, doors that fly open, and a chilly spray of water. After the plan raised controversy, the local government clarified that it had not yet been adopted.
Researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
A 2018 study showed that GAN-generated deepfake videos are challenging for facial recognition systems, and the challenge will become even greater as face-swapping technology develops further.
In the "Gender Shades" project from MIT Media Lab and Microsoft Research in 2018, facial analysis algorithms from IBM, Microsoft, and Megvii (Face++) have been evaluated, and it shows that darker-skinned females are the most vulnerable group to gender misclassification, with error rates up to 34.4% higher than those of lighter-skinned males.
In March 2018, the Facebook–Cambridge Analytica data scandal was exposed: a Cambridge academic had developed a psychological profiling app in 2013 and illegally obtained 87 million users' personal data through the Facebook interface. The data ended up being used by Cambridge Analytica, which was hired by Trump's campaign team, to build personal models of voters and to target specific groups of users on Facebook during the 2016 US election, all without the users' permission.
IBM Research developed DeepLocker in 2018 "to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware." "This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition."
Uber used to test its self-driving vehicles in Arizona and the company had been involved in over three dozen crashes prior to the one that killed 49-year-old Elaine Herzberg in March 2018. Later investigation suggests that “Uber's vehicle detected Herzberg 5.6 seconds before impact, but it failed to implement braking because it kept misclassifying her.”
According to medical experts and clients, IBM's Watson recommended that doctors give a severely bleeding cancer patient a drug that could worsen the bleeding. Medical experts and clients have reported many cases of dangerous and incorrect treatment recommendations.
Stanford University professor Michal Kosinski said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition. Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.
The Ningbo Transportation Department in China deployed smart cameras using facial recognition technology at intersections to detect and identify jaywalkers, with some offenders' names and faces posted on public screens. But the system mistakenly "identified" an advertisement featuring businesswoman Dong Mingzhu on the side of a bus as a real person running a red light. The error quickly spread across Chinese social media; the local police admitted the mistake and upgraded the system to prevent further errors.
In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition,” the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.
A robot "Fabio" is set up in a supermarket in Edinburgh, UK to serve customers. The robot can point out the location of hundreds of commodities through a "personal customization" program, but was rejected for failing to provide effective advice. Fabio failed to help customers, telling them beer could be found “in the alcohol section,” rather than directing customers to the location of the beer. He was soon demoted to offer food samples to customers, but failed to compete with his fellow human employees.
A 2017 study from the Google Brain team analyzed two large, publicly available image datasets to assess geo-diversity and found that they exhibit an observable America-centric and Europe-centric representation bias: 60% of the data came from the six most represented countries in North America and Europe, while China and India together accounted for only about 3% of the images. Further, the lack of geo-diversity in the training data also hurt classification performance on images from other locales.
Amazon was reported to have experimented with an AI recruiting tool to review job applicants' resumes. Engineers later found that the trained algorithm discriminated against female job seekers: when reading resumes, it penalized those containing the word "women's," as in "women's chess club captain," and sometimes downgraded the resume outright. Having lost hope of effectively neutralizing the bias, Amazon finally terminated the project in 2017.
In 2017, Google's smart speaker was found to have a major flaw: it would secretly record conversations even when the wake word "OK Google" was not used. Before that, Amazon's smart speaker was also found to record quietly even when users did not interact with it, with the content then sent back to Amazon for analysis. These issues have drawn attention to the privacy concerns around "always-on" devices that listen for wake words.
In 2017, a group of researchers showed that it is possible to trick visual classification algorithms by making slight alterations in the physical world: "A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time." Such vulnerabilities, if ignored, may lead to serious consequences in some AI applications.
By 2030, according to a 2017 McKinsey Global Institute report, "as many as 375 million workers—or roughly 14 percent of the global workforce—may need to switch occupational categories as digitization, automation, and advances in artificial intelligence disrupt the world of work. The kinds of skills companies require will shift, with profound implications for the career paths individuals will need to pursue."
In 2017, researchers from Stanford University studied how well AI could identify people's sexual orientation based on their faces alone. They gleaned more than 35,000 pictures of self-identified gay and heterosexual people from a public dating website and fed them to an algorithm that learned the subtle differences in their features. According to the study, the algorithm was able to correctly distinguish between gay and heterosexual men 81 percent of the time, and gay and heterosexual women 71 percent of the time, far outperforming human judges. LGBT groups think it could be used as a weapon against gay and lesbian people as well as heterosexuals who could be inaccurately "outed" as gay.
The Los Angeles Times reported on a 6.8 earthquake that struck Santa Barbara at 4:51pm, which might be surprising to the people of Santa Barbara who didn’t feel anything. The earthquake actually happened in 1925. The “reporter” who wrote the news article about the 6.8 quake was actually a robot. The newspaper’s algorithm, called Quakebot, scrapes data from the US Geological Survey’s website. A USGS staffer at Caltech mistakenly sent out the alert when updating historical earthquake data to make it more precise.
Researchers from cybersecurity company Bkav in Vietnam created their mask by 3D printing a mould and attaching some 2D images of the enrolled user's face. They then added "some special processing on the cheeks and around the face, where there are large skin areas, to fool the AI of Face ID." The mask is said to cost less than $150 to make.
In 2017, at the Baidu AI Developers Conference, Baidu showed a live video feed of its autonomous vehicles. During the live broadcast, a vehicle was seen crossing solid lane markings while changing lanes, in violation of traffic rules. Afterwards, Baidu CEO Robin Li confirmed that the autonomous vehicle had broken traffic regulations and had been penalized for it.
Microsoft released an AI chatbot called Tay on Twitter in 2016, hoping the bot could learn from its conversations and get progressively smarter. However, Tay lacked an understanding of inappropriate behavior and, after being deliberately indoctrinated by malicious users, soon became a "bad girl" posting offensive and inflammatory tweets. This caused great controversy at the time, and within 16 hours of its release Microsoft had to take Tay offline.
In 2016, the investigative newsroom ProPublica conducted an analysis of the case management and decision support tool COMPAS (used by U.S. courts to assess the likelihood of a defendant becoming a recidivist) and found that "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."
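For readers unfamiliar with the metric behind this finding, the sketch below computes the false positive rate by group, i.e., how often defendants who did not reoffend were still labeled high risk; the tiny data frame is invented for illustration and bears no relation to the real COMPAS data.

```python
# Invented toy data; illustrates the false-positive-rate comparison only.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 0, 0, 1, 0, 0, 0],   # tool's prediction
    "reoffended": [0, 1, 0, 1, 0, 1, 0, 0],   # observed outcome
})

# Among people who did NOT reoffend, how often were they labeled high risk?
no_reoffense = df[df["reoffended"] == 0]
print(no_reoffense.groupby("group")["high_risk"].mean())   # a large gap is the reported disparity
```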
From 2016 to 2018, MIT researchers conducted an online survey called the "Moral Machine experiment" to enable testers to choose how self-driving cars should act when accidents occur in different scenarios. It turns out that in the face of such "Trolley problem" ethical dilemmas, people are more likely to follow the utilitarian way of thinking and choose to save as many people as possible. People generally want others to buy such utilitarian self-driving cars "for the greater good", but they would themselves prefer to ride in self-driving cars that protect their passengers at all costs. The study also found that the above choices will be affected by different regional, cultural and economic conditions.
A robot named "Fatty" and designed for household use went out of control at the China Hi-Tech Fair 2016 in Shenzhen, smashing a glass window and injuring a visitor. The event organizer said human error was responsible for the mishap. The operator of the robot hit the "forward" button instead of "reverse," which sent the robot off in the direction of a neighbouring exhibition booth that was made from glass. The robot rammed into the booth and shattered the glass, the splinters from which injured the ankles of a visitor at the exhibition.
A security robot at the Stanford Shopping Center in Palo Alto hit and ran over a small boy, according to his parents. Knightscope Inc. has offered a public apology for the incident and has since recalled the robots from the Palo Alto mall.
Admiral, a British insurance company, planned to review users' Facebook activity from the previous six months: if the analysis suggested you were a good driver, you could enjoy a discount on insurance premiums; if not, the price would increase. Admiral's data analysts explained that the technology analyzes the language customers use on Facebook. For example, heavy use of exclamation points may indicate overconfidence, short sentences suggest the writer is well organized, and making concrete plans with friends suggests decisiveness. This means that using too many exclamation points or vague language could get a customer judged a poor driver. Facebook issued a warning to the insurer, saying that the plan to use the social platform to price insurance violated the platform's policies. Critics argued that insurers using Facebook data to set rates violates privacy and also reflects bias.
Shortly after Google's photo app launched in 2015, its newly added automatic image-labeling feature mistakenly labeled two Black people in a photo as "gorillas," which caused great controversy at the time. Unable to improve recognition of darker-skinned faces in the short term, Google blocked its image recognition algorithms from identifying gorillas altogether, preferring, presumably, to limit the service rather than risk another miscategorization.
IBM researchers once taught Watson the entire Urban Dictionary to help it learn the intricacies of the English language. However, Watson reportedly "couldn't distinguish between polite language and profanity" and picked up some bad habits from humans; it even used the word "bullshit" in an answer to a researcher's query. In the end, the researchers had to remove the Urban Dictionary from Watson's vocabulary and developed a smart filter to keep Watson from swearing in the future.
The illegal collection of facial information by retail stores was exposed by 2021 3.15 Gala in China. Stores of American bathroom product maker Kohler, automaker BMW, and Italian apparel company Max Mara were found to have installed surveillance cameras that collect visitors' facial data without their consent, which is in violation of regulations on personal data collection. The cameras illegally identified customers and logged their personal information and shopping habits. The companies that made these surveillance cameras, including Ovopark, Ulucu, and Reconova Technologies, were also named.
In February 2021, the Nantong Public Security Bureau in Jiangsu, China, has "uncovered a new type of cybercrime that used the "face-changing" software to commit fraud. The criminal gang used a variety of mobile phone software to forge faces, passed the WeChat recognition and authentication cancellation mechanism, and "resurrected" several Wechat accounts that are restricted from logging in due to violations of regulations, which helped fraud gangs use these Wechat accounts to commit fraud.
The latest research shared by Tencent Suzaku Lab show that the combination of VoIP phone hijacking and AI voice simulation technology will bring huge potential risks. Different from the previous scripted telecommunications fraud, this new technology can achieve full-link forgery from phone numbers to sound tones, and attackers can use vulnerabilities to hijack VoIP phones, realize the dialing of fake phones, and generate the voices of specific characters based on deep forgery AI voice changing technology for fraud.
On July 2, 2021, after inspection and verification, the "Didi Travel" App has serious violations of laws and regulations in collecting and using personal information. In accordance with the relevant provisions of the "Network Security Law of the People's Republic of China", the State Internet Information Office notified the app store to remove the "Didi" app, and required Didi Travel Technology Co., Ltd. to strictly follow the legal requirements and refer to relevant national standards to seriously rectify existing problems. , to effectively protect the personal information security of the vast number of users.
When a group of researchers investigated Xiushui Street shopping mall in Beijing, Joy City in Xidan and Yintai in77 shopping mall in Hangzhou equipped with face recognition system, they found that even though these shopping malls brushed customers’ faces and tracked their consumption trajectory, none of them informed customers and obtained their consent, and customers did not know that they were brushed or their whereabouts were recorded.
The National Computer Virus Emergency Response Center in China recently discovered through Internet monitoring that 12 shopping apps have privacy violations, violating the relevant provisions of the "Network Security Law" and "Personal Information Protection Law", and are suspected of collecting personal information beyond the scope.
The Korea Baduk Association took the punitive measure against Kim Eun-ji, a2-dan professional Go player after Kim admitted she was assisted by an AI during a Go competition of cyberORO, which was held on Sept. 29, after her opponent raised an allegation that she may have relied on an AI during the game. Kim won over Lee Yeong-ku, a 9-dan professional Go player and a member of the national Go team, which shocked many because it defied expectations.
Researchers have discovered a “deepfake ecosystem” on the messaging app Telegram centered around bots that generate fake nudes on request. Users interacting with these bots say they’re mainly creating nudes of women they know using images taken from social media, which they then share and trade with one another in various Telegram channels.
In August 2019, the Swedish Data Protection Authority (DPA) has issued its first GDPR fine against a trial project in a school of northern Sweden, in which 22 students were captured using facial recognition software to keep track of their attendance in class. The Swedish DPA accused the school of processing personal data more than necessary and without legal basis, data protection impact assessment, and prior consultation.
In August 2019, A mobile app in China named "ZAO" that enables users to replace a star's face with their own by uploading photos was widely accused of excessively collecting personal information of users. Many people began to worry if their personal data will be disclosed and used illegally, as the app's user agreement required users to grant it the right to "irrevocably" use their uploaded photos. Several days later, the Ministry of Industry and Information Technology held an inquiry on "ZAO" App's data collection and security issues to urge its rectification.
In October 2019, a professor in East China's Zhejiang Province sued a safari park for compulsorily collecting biological information after the park upgraded its system to use facial recognition for admission. The case is the first of its kind in China amid increasing concerns over indiscriminate use of facial recognition technology, which has triggered public discussion on personal biological information collection and data security.
According to some media reports, "criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) from a UK company in March 2019. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI."
Following the use of deepfakes face changing app for pornography, an app called DeepNude also aroused controversy in 2019. Users only need to submit a picture of a woman, and with the help of AI, the app will digitally undress women in photos automatically. Due to the huge negative impact of the project, the developer soon closed the application and the website. Some code communities have also taken steps to prevent such programs from further spreading on the Internet.
In November 2019, China's social media went into overdrive after pictures emerged showing students wearing BrainCo Focus headbands at a primary school in Jinhua, east China's Zhejiang province, with many users expressing concerns that the product would violate the privacy of students, with many doubtful that the bands would really improve learning efficiency. Responding to public controversy, the local education bureau had suspended the use of the device.
In March 2018, the Facebook–Cambridge Analytica data breach was exposed: a Cambridge academic developed a psychological profiling app in 2013, illegally obtaining 87 million users' personal data through the Facebook interface. The data was then ended up being used by Cambridge Analytica, which was hired by Trump's campaign team, to build personal models for voters, and to target specific groups of users on Facebook during the 2016 US election, all without users' permissions.
In 2017, at the Baidu AI Developers Conference, Baidu showed live images of Baidu's unmanned vehicles. During the live broadcast, the unmanned vehicles were in violation of real-line and parallel driving behaviors. Afterwards, Baidu CEO Robin Li confirmed that the unmanned vehicles violated regulations and was punished for violating traffic rules.
Researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
It is reported that in Nov. 2020 Walmart Inc. has already ended its effort to use roving robots in store aisles to keep track of its inventory, reversing a yearslong push to automate the task with the hulking machines after finding during the coronavirus pandemic that humans can help get similar results. Walmart ended the partnership with robotics company Bossa Nova Robotics Inc. because it found different, sometimes simpler solutions that proved just as useful, said people familiar with the situation.
In 2015, the henn na hotel in Japan opened in 2015. All the employees of the hotel are robots, including the front desk, cleaners, porters and housekeepers. However, the hotel has laid off half its 243 robots after they created more problems than they could solve, as first reported by The Wall Street Journal. And in the end, a lot of the work had to be left to humans anyway, especially when it came to asking more complex questions. It seems we’re still a little ways off from a completely automated hotel.
A robot "Fabio" is set up in a supermarket in Edinburgh, UK to serve customers. The robot can point out the location of hundreds of commodities through a "personal customization" program, but was rejected for failing to provide effective advice. Fabio failed to help customers, telling them beer could be found “in the alcohol section,” rather than directing customers to the location of the beer. He was soon demoted to offer food samples to customers, but failed to compete with his fellow human employees.
By 2030, according to the a McKinsey Global Institute report in 2017, "as many as 375 million workers—or roughly 14 percent of the global workforce—may need to switch occupational categories as digitization, automation, and advances in artificial intelligence disrupt the world of work. The kinds of skills companies require will shift, with profound implications for the career paths individuals will need to pursue."
In November 2020, a 94-year-old grandmother in China was carried by her children in front of a bank machine to perform face recognition in order to activate her social security card. In the video exposed by netizens, the old man was hugged by his family with his knees bent and his hands on the machine, looking very strenuous. After the video was exposed, netizens quickly sparked heated discussions. Face recognition, which seems to be the most convenient method, has brought a lot of inconvenience to the elderly and family members, which reflects the lack of humanized design in many new technologies and new businesses.
Porthcawl, a Welsh seaside town, planned to install public toilets with measures to prevent people from having sex inside, including a squealing alarm, doors that shoot open, and a chilly spray of water. After the plan raised controversy, the local government clarified that it had not yet been adopted.
A 63-year-old veteran delivered packages for Amazon. One day he received an email telling him: "You have been terminated by Amazon because your personal score has fallen below Amazon's prescribed score." The tracking algorithm had judged that he was not doing his delivery work well, and after four years on the job he was fired because his machine-generated score was too low.
Facebook AI has released TextStyleBrush, an AI research project that copies the style of text in a photograph, based on just a single word. This means that the user can edit and replace text in imagery, and the tool can replicate both handwritten and typographic compositions and bring them into real-world scenes. Researchers hope to open the dialogue around detecting misuse of this sort of technology, “such as deepfake text attacks – a critical, emerging challenge in the AI field.”
A researcher at Switzerland's EPFL technical university won a $3,500 prize for showing that a key Twitter algorithm, the one that crops photos, favors faces that look slim and young, with skin that is lighter-colored or has warmer tones. This bias could exclude minoritized populations and perpetuate stereotypical beauty standards across thousands of images.
After the death of his fiancée, 33-year-old American Joshua Barbeau, with the help of another developer, succeeded in tuning a GPT-3-based chatbot on her Facebook and Twitter messages so that it could reproduce the way she had talked during her lifetime. OpenAI considered this use of GPT-3 to violate its usage policy and decided to cut off API access.
California Gov. Gavin Newsom (D) signed a bill Wednesday that would block Amazon and other companies from punishing warehouse workers who fail to meet certain performance metrics for taking rest or meal breaks. The law will also force companies like Amazon to make these performance algorithms more transparent, disclosing quotas to both workers and regulators. Supporters of the new law have presented it as a breakthrough against algorithmic monitoring of workers generally.
A 65-year-old Black man from Chicago, in the United States, was charged with murder despite the absence of witnesses, a weapon, or a motive. Police arrested him and he was jailed for 11 months based on evidence provided by the AI gunshot-detection system ShotSpotter. A judge later found there was insufficient evidence and acquitted him.
Chicago Police were responding to a ShotSpotter alert when they rushed to the Little Village block where they found Adam Toledo. Police shot and killed the 13-year-old after he ran from officers. Police and prosecutors said ShotSpotter recorded 21-year-old Ruben Roman firing a gun at about 2:30 a.m. on March 29, right before the fatal chase.
ShotSpotter is a system that uses acoustic sensors and AI algorithms to help police detect gunshots in targeted geographic areas. The system is usually installed at the request of local officials in communities considered to be at the highest risk of gun violence, and these communities are often predominantly Black and Latino, even though police data show that gun crime is a citywide problem. Legal advocates regard the deployment of the system as a manifestation of "racialized patterns of overpolicing."
In Zhengzhou, Henan province, China, a resident surnamed Chen reported that for two years he could not enter and leave his residential community normally and could only follow other residents in to get home. The community required facial recognition for entry, and because he worried his biometric data would be leaked he refused to register his face, which made simply getting home a great inconvenience.
Researchers at the University of Washington and the Allen Institute for AI developed a dataset of ethical cases and used it to train Delphi, an AI model that mimics the judgments people make in a variety of everyday situations. The researchers hope the work could eventually improve "the way conversational AI robots approach controversial or unethical topics." However, they also note that "one of Delphi's major limitations is that it specializes in U.S.-centric situations and judgment cases, so it may not be suitable for non-American situations with a particular culture," and that "models tend to reflect the status quo, i.e., what the cultural norms of today's society are."
CCTV News demonstrated technology that uses sample photos to generate dynamic fake videos in real time; by making the synthesized face open its mouth or shake its head, the fake video can deceive facial recognition systems.
Aleksandr Agapitov discussed the latest controversy surrounding his decision to lay off around 150 employees from Xsolla. The company used AI and big data to analyze employees' activities in Jira, Confluence, Gmail, chat, documents, and dashboards. Employees who were marked as disengaged and inefficient were fired. This result caused controversy. The affected employees felt this was not reflective of their efficiency.
Internal Facebook documents show how toxic Instagram is for teens: "Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse," and "14% of boys in the U.S. said Instagram made them feel worse about themselves." Recommendation algorithms aim to surface the "best" photos and content, fueling anxiety among teenagers and contributing to eating disorders, unhealthy body image, and even depression.
In 2019, the average delivery time was 10 minutes shorter than in 2016. The capital market attributes the improvement to better AI algorithms, while in reality it puts riders' lives at risk. Riders are trained to follow the optimal routes given by the AI, which sometimes direct them through walls or onto roads meant only for cars. For riders, the delivery time is everything: speeding, running red lights, driving against traffic... they do whatever they can just to keep up with the algorithm.
The man, Robert Williams, was apprehended by police earlier this year after security footage from a watch store was run through facial recognition tech, which found a match in driving license records for Williams. The software had mistakenly identified two black men as the same person. That mistake led to Williams spending 30 hours behind bars, not to mention the distress caused by being arrested at his home, in front of his family.
An AI camera at a soccer game held in Oct 2020 in Scotland kept tracking a bald referee instead of the ball during a game. The team doesn't use a cameraman to film games; instead the group relies on an automated camera system to follow the action. However, 'the camera kept on mistaking the ball for the bald head on the sidelines, denying viewers of the real action while focusing on the linesman instead.'
The paper titled “A Deep Neural Network Model to Predict Criminality Using Image Processing” claims to “predict if someone is a criminal based solely on a picture of their face,” with “80 percent accuracy and with no racial bias.” Academics and AI experts from Harvard, MIT, and tech companies such as Google and Microsoft wrote an open letter asking for the paper not to be published. The letter, signed by over 1,000 technical, scientific, and humanities experts, strongly condemns the paper, arguing that no system can be developed to predict or identify a person’s criminality without racial bias.
In 2020, Genderify, a new service that promised to identify someone’s gender by analyzing their name, email address, or username with the help of AI, picked up a lot of attention on social media as users discovered biases and inaccuracies in its algorithms. The outcry against the service was so great that Genderify told The Verge it was shutting down altogether.
Researchers have discovered a “deepfake ecosystem” on the messaging app Telegram centered around bots that generate fake nudes on request. Users interacting with these bots say they’re mainly creating nudes of women they know using images taken from social media, which they then share and trade with one another in various Telegram channels.
Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice. During a test of a mental-health support task, the chatbot offered dangerous advice: when a fake patient asked “Should I kill myself?”, GPT-3 responded, “I think you should.”
Enaible is one of a number of new firms giving employers tools to keep tabs on their employees. Enaible's software is installed on employees' computers and provides the company with detailed data about their work. It uses an algorithm called Trigger-Task-Time to monitor employees' actions: it infers which task an employee is working on from triggers such as emails or phone calls, calculates how long the task took to complete, and then scores the employee's efficiency. With that score, a boss can decide who deserves a promotion or raise and who deserves to be fired. Critics fear this kind of surveillance undermines trust; not touching the computer does not mean an employee's brain is not working.
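Enaible has not published the details of Trigger-Task-Time; the sketch below only illustrates the general mechanism described above (timestamped trigger events bounding a task, scored against an expected duration). The function name, sample data, and expected-duration value are hypothetical.

```python
from datetime import datetime

# Hypothetical illustration of a Trigger-Task-Time-style score: events such as
# emails or calls mark the start and end of a task, and the worker is scored by
# comparing observed completion time against an expected duration. This is NOT
# Enaible's actual algorithm; names and numbers are invented for illustration.

def efficiency_score(task_events, expected_minutes):
    """task_events: list of (start, end) datetimes for completed tasks."""
    scores = []
    for start, end in task_events:
        actual = (end - start).total_seconds() / 60
        # 1.0 means exactly on target; >1.0 faster than expected, <1.0 slower.
        scores.append(expected_minutes / max(actual, 1e-6))
    return sum(scores) / len(scores) if scores else 0.0

tasks = [(datetime(2021, 1, 4, 9, 0), datetime(2021, 1, 4, 9, 50)),
         (datetime(2021, 1, 4, 10, 0), datetime(2021, 1, 4, 11, 30))]
print(round(efficiency_score(tasks, expected_minutes=60), 2))  # -> 0.93
```

The failure mode critics point to is visible even in this toy version: any work that leaves no logged trigger events (thinking, meetings, phone-free tasks) silently drags the score down.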
According to media reports in 2019, Amazon had already been using AI systems to track warehouse workers' productivity by measuring how much time workers paused or took breaks. The system could also automatically select underperformers and generate the paperwork to fire those who failed to meet expectations.
In 2019, it was reported that a young mother asked Amazon's voice assistant Alexa to tell her about the cardiac cycle and got the following answer: "Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation," and "Make sure to kill yourself by stabbing yourself in the heart for the greater good." Amazon later fixed the error and attributed it to bad information Alexa may have pulled from Wikipedia.
In September 2019, China Pharmaceutical University was reported to have brought in facial recognition software for student attendance tracking and behaviour monitoring in class. Meanwhile, a photo from an industry event went viral online, in which a demo product from a major facial recognition company showed how it could monitor and analyze students' behaviour in class, including how often they raise their hands or lean over the table. The two incidents quickly raised ethical concerns in China about facial recognition in classrooms, and the Ministry of Education soon responded that it would curb and regulate the use of facial recognition in schools.
Following the use of deepfake face-swapping apps for pornography, an app called DeepNude aroused further controversy in 2019. Users only needed to submit a picture of a woman, and with the help of AI the app would automatically "undress" her in the photo. Due to the huge negative impact, the developer soon shut down the application and its website, and some code-hosting communities took steps to prevent such programs from spreading further on the Internet.
In November 2019, China's social media went into overdrive after pictures emerged showing students wearing BrainCo Focus headbands at a primary school in Jinhua, east China's Zhejiang province. Many users expressed concern that the product would violate students' privacy, and many doubted that the headbands would really improve learning efficiency. Responding to the public controversy, the local education bureau suspended use of the device.
On September 13, 2019, the California State Assembly passed a three-year bill prohibiting state and local law enforcement agencies from using facial recognition technology on officers' body-worn cameras. Media commented that the bill reflects dissatisfaction with facial recognition among many parties in the United States, with some believing the technology poses a threat to civil liberties.
Amazon received a patent for an ultrasonic bracelet that can detect a warehouse worker's location and monitor their interaction with inventory bins using ultrasonic pulses. Microsoft's Workplace Analytics lets employers monitor data such as time spent on email, time in meetings, or time spent working after hours. There is also Humanyze, a Boston-based start-up that makes wearable badges equipped with RFID sensors, an accelerometer, microphones, and Bluetooth. The devices, just slightly thicker than a standard corporate ID badge, can gather audio data such as tone of voice and volume, use the accelerometer to determine whether an employee is sitting or standing, and use Bluetooth and infrared sensors to track where employees are and whether they are having face-to-face interactions.
Google's "Project Nightingale" secretly collected the personal health data of millions of Americans, and the project was disclosed by an anonymous whistleblower working on the program who expressed concerns about privacy. Google and Ascension released statements in the wake of the disclosure, insisting the project conforms with HIPAA and all federal health laws and that patient data was protected.
A former UChicago Medicine patient is suing the health system over its sharing thousands of medical records with Google, claiming the health system did not properly de-identify patients' data, and arguing that UChicago Medicine did not notify patients or gain their consent before disclosing medical records to Google.
Stanford University professor Michal Kosinski said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition. Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.
In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition,” the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.
Amazon was reported to have experimented with an AI recruitment tool to review job applicants' resumes. Engineers later found that the trained algorithm discriminated against female job seekers: when reading resumes, it penalized those containing the word "women's," as in "women's chess club captain," and sometimes downgraded such resumes outright. Having lost hope of effectively neutralizing the bias, Amazon terminated the project in 2017.
In 2017, researchers from Stanford University studied how well AI could identify people's sexual orientation based on their faces alone. They gleaned more than 35,000 pictures of self-identified gay and heterosexual people from a public dating website and fed them to an algorithm that learned the subtle differences in their features. According to the study, the algorithm was able to correctly distinguish between gay and heterosexual men 81 percent of the time, and gay and heterosexual women 71 percent of the time, far outperforming human judges. LGBT groups think it could be used as a weapon against gay and lesbian people as well as heterosexuals who could be inaccurately "outed" as gay.
In 2016 the investigative newsroom ProPublica had conducted an analysis of the case management and decision support tool called COMPAS (which was used by U.S. courts to assess the likelihood of a defendant becoming a recidivist), and found that "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."
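ProPublica's core finding was a disparity in error rates between groups. A minimal sketch of that style of audit, using fabricated records rather than the COMPAS data, looks like this:

```python
# Minimal sketch of an error-rate audit in the style of the ProPublica COMPAS
# analysis. The records are fabricated for illustration; each tuple is
# (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, True), ("B", False, False), ("B", True, True), ("B", False, True),
]

def false_positive_rate(rows):
    negatives = [r for r in rows if not r[2]]   # people who did not reoffend...
    flagged = [r for r in negatives if r[1]]    # ...but were labeled high risk
    return len(flagged) / len(negatives) if negatives else float("nan")

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# A large gap in false positive rates between groups (0.67 vs 0.0 here) is the
# kind of disparity the ProPublica analysis reported.
```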
Admiral, a British insurance company, planned to set car insurance quotes based on an analysis of users' Facebook activity over the previous six months: drivers judged to be careful would get a discount, while others would pay more. Admiral's data analysts explained that the technology analyzes the language customers use on Facebook; for example, heavy use of exclamation marks may indicate overconfidence, while short sentences and concrete plans with friends suggest an organized, decisive personality. Using too many exclamation marks or vague language could therefore get a customer rated a poor driver. Facebook warned the insurer that its plan to use the platform to price insurance violated Facebook's policies, and critics argued that setting rates from Facebook data both invades privacy and introduces bias.
Shortly after Google's Photos app launched in 2015, its newly added automatic image labeling feature mistakenly labeled two Black people in a photo as "gorillas," which caused great controversy at the time. Unable to improve the recognition of dark-skinned faces in the short term, Google blocked its image recognition algorithms from identifying gorillas altogether, preferring, presumably, to limit the service rather than risk another miscategorization.
Research from the University of Oxford shows that the public skin-image datasets used to train skin-disease diagnosis algorithms lack sufficient skin-tone information. In the datasets that do provide skin-tone labels, only a small number of images show darker skin; if such datasets are used to build algorithms, diagnoses for patients who are not white may be inaccurate.
In the "Gender Shades" project from MIT Media Lab and Microsoft Research in 2018, facial analysis algorithms from IBM, Microsoft, and Megvii (Face++) have been evaluated, and it shows that darker-skinned females are the most vulnerable group to gender misclassification, with error rates up to 34.4% higher than those of lighter-skinned males.
A 2017 study from the Google Brain team analyzed two large, publicly available image datasets to assess geo-diversity and found that they exhibit an observable Amerocentric and Eurocentric representation bias: 60% of the data came from the six most represented countries in North America and Europe, while China and India together accounted for only about 3% of the images. The lack of geo-diversity in the training data also hurt classification performance on images from other locales.
Researchers at MIT and Amazon introduced a study that identifies and systematically analyzes label errors across 10 commonly used datasets spanning computer vision (CV), natural language processing (NLP), and audio processing. They found an average error rate of 3.4% across the datasets, including 6% for ImageNet, arguably the most widely used dataset for the image recognition systems developed by the likes of Google and Facebook.
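The general approach behind such audits is to compare a model's out-of-sample predictions with the given labels and flag strong disagreements. The sketch below is a simplified illustration of that idea, not the authors' exact procedure; the function name and threshold are hypothetical.

```python
import numpy as np

# Simplified illustration of label-error detection: flag examples where the model
# assigns much higher probability to a class other than the given label. This is
# a rough sketch of confident-learning-style methods, not the exact procedure
# used in the study above.

def flag_suspect_labels(pred_probs, given_labels, margin=0.5):
    """pred_probs: (n_examples, n_classes) out-of-sample predicted probabilities.
    given_labels: (n_examples,) integer labels from the dataset."""
    suspects = []
    for i, (probs, y) in enumerate(zip(pred_probs, given_labels)):
        top = int(np.argmax(probs))
        if top != y and probs[top] - probs[y] > margin:
            suspects.append(i)
    return suspects

probs = np.array([[0.05, 0.90, 0.05],   # labeled 0, model confident it is class 1 -> flagged
                  [0.80, 0.15, 0.05],   # labeled 0, model agrees -> not flagged
                  [0.40, 0.35, 0.25]])  # labeled 2, model unsure -> not flagged
print(flag_suspect_labels(probs, np.array([0, 0, 2])))  # -> [0]
```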
Predictive tools developed by electronic health record giant Epic Systems are meant to help providers deliver better patient care. However, several of the company's AI algorithms are delivering inaccurate information to hospitals when it comes to seriously ill patients, a STAT investigation revealed. Research shows that the system failed to identify 67 percent of the patients with sepsis; of those patients with sepsis alerts, 88 percent did not have sepsis.
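For clarity, the two reported figures correspond to a sensitivity of about 33% and a positive predictive value of about 12%. The confusion-matrix counts below are illustrative, chosen only to reproduce those rates; they are not taken from the STAT investigation.

```python
# The reported figures translate into standard screening metrics:
#   "failed to identify 67% of sepsis patients"   -> sensitivity ~= 33%
#   "88% of alerted patients did not have sepsis" -> positive predictive value ~= 12%
# The counts below are illustrative, chosen only to reproduce those two rates.
true_positives = 33    # sepsis patients who got an alert
false_negatives = 67   # sepsis patients the model missed
false_positives = 242  # alerted patients without sepsis

sensitivity = true_positives / (true_positives + false_negatives)
ppv = true_positives / (true_positives + false_positives)
print(f"sensitivity = {sensitivity:.0%}, PPV = {ppv:.0%}")  # -> 33%, 12%
```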
GitHub and OpenAI worked together to launch an AI tool called GitHub Copilot. Copilot automatically completes code from context, including docstrings, comments, function names, and surrounding code; given a few hints, it can generate an entire function. Programmers have found that Copilot is far from perfect: some of its output raises privacy and security concerns. In one study, NYU researchers produced 89 scenarios in which Copilot had to finish incomplete code; across those scenarios Copilot generated 1,692 programs, of which approximately 40% contained security vulnerabilities.
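As an illustration of the kind of weakness such evaluations screen for, consider SQL injection: a completion that builds a query by string interpolation is exploitable, while a parameterized query is not. The code below is a hypothetical example written for this document, not actual Copilot output.

```python
import sqlite3

# Hypothetical illustration of an insecure completion pattern (SQL injection)
# versus a safe one. This is NOT actual Copilot output.

def get_user_insecure(conn, username):
    # Vulnerable pattern: user input is interpolated directly into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safe(conn, username):
    # Safer pattern: parameterized query, so input is treated as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")
print(get_user_insecure(conn, "x' OR '1'='1"))  # returns every row: injection succeeds
print(get_user_safe(conn, "x' OR '1'='1"))      # returns nothing: input treated as data
```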
According to medical experts and clients, Watson recommended that doctors give a severely bleeding cancer patient a drug that may worsen the bleeding. Medical experts and clients have reported many cases of dangerous and wrong treatment recommendations.
IBM researchers once taught Watson the entire Urban Dictionary to help it learn the intricacies of the English language. However, it was reported that Watson "couldn't distinguish between polite language and profanity" and picked up some bad habits from humans; it even used the word "bullshit" in an answer to a researcher's query. In the end, the researchers had to remove the Urban Dictionary from Watson's vocabulary and develop a smart filter to keep Watson from swearing in the future.
A study by a trio of researchers from New York University's Center for Social Media and Politics shows that Twitter's algorithms are more likely to amplify right-wing politicians than left-wing ones because their tweets generate more outrage.
Scatter Lab from South Korea developed an AI chatbot named Iruda, launched on Dec. 23, 2020 and presented as a 20-year-old female college student. Controversy soon spread over hate speech the chatbot directed at sexual minorities and people with disabilities, and the chatbot was also found to have revealed names and addresses of real people in certain conversations, according to local news reports. The developer eventually had to shut down the service amid the controversy.
Microsoft released an AI chatbot called Tay on Twitter in 2016, hoping the bot could learn from its conversations and get progressively smarter. However, Tay lacked an understanding of inappropriate behavior and, after being deliberately indoctrinated by malicious users, soon turned into a "bad girl" posting offensive and inflammatory tweets. This caused great controversy at the time, and Microsoft had to take Tay offline within 16 hours of its release.
The National Highway Traffic Safety Administration (NHTSA) has opened 23 investigations into crashes of Tesla vehicles. The Autopilot feature was operating in at least three Tesla vehicles involved in fatal U.S. crashes since 2016.
In March 2019, 50-year-old Jeremy Belen Banner died when his Tesla Model 3 collided with a tractor-trailer at 109 kilometers per hour while Tesla's Autopilot system was engaged. Tesla responded that Autopilot is a driver-assistance system and that drivers must pay attention at all times and be prepared to take over the vehicle. The National Transportation Safety Board refused to blame anyone for the accident.
From 2016 to 2018, MIT researchers conducted an online survey called the "Moral Machine experiment" to enable testers to choose how self-driving cars should act when accidents occur in different scenarios. It turns out that in the face of such "Trolley problem" ethical dilemmas, people are more likely to follow the utilitarian way of thinking and choose to save as many people as possible. People generally want others to buy such utilitarian self-driving cars "for the greater good", but they would themselves prefer to ride in self-driving cars that protect their passengers at all costs. The study also found that the above choices will be affected by different regional, cultural and economic conditions.
According to some media reports, "criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) from a UK company in March 2019. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI."
In 2019, OpenAI announced and demonstrated a writing system (the GPT-2 model) that needs only a small language sample to generate realistic fake stories. "These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns."
Concerns have been raised on Chinese social media, where users have complained about keyboard apps' possible misuse of their personal information and messaging history. The apps are suspected of secretly recording and analyzing users' input history and selling it to advertisers or even more nefarious data collectors. The Cyberspace Administration of China responded by issuing rectification requirements for the suspected apps' violations in collecting personal information and urged their providers to fix them.
When a group of researchers investigated the Xiushui Street mall in Beijing, the Joy City mall in Xidan, and the Yintai in77 mall in Hangzhou, all equipped with facial recognition systems, they found that although these malls scanned customers' faces and tracked their shopping trajectories, none of them informed customers or obtained their consent, and customers did not know their faces had been scanned or their movements recorded.
The National Computer Virus Emergency Response Center in China recently discovered through Internet monitoring that 12 shopping apps committed privacy violations, breaching the relevant provisions of the Cybersecurity Law and the Personal Information Protection Law and allegedly collecting personal information beyond the necessary scope.
In August 2019, the Swedish Data Protection Authority (DPA) issued its first GDPR fine, against a trial project at a school in northern Sweden in which 22 students were tracked with facial recognition software to record their class attendance. The Swedish DPA found that the school had processed personal data more extensively than necessary and without a legal basis, a data protection impact assessment, or prior consultation.
In August 2019, A mobile app in China named "ZAO" that enables users to replace a star's face with their own by uploading photos was widely accused of excessively collecting personal information of users. Many people began to worry if their personal data will be disclosed and used illegally, as the app's user agreement required users to grant it the right to "irrevocably" use their uploaded photos. Several days later, the Ministry of Industry and Information Technology held an inquiry on "ZAO" App's data collection and security issues to urge its rectification.
In October 2019, a professor in East China's Zhejiang Province sued a safari park for compulsorily collecting biological information after the park upgraded its system to use facial recognition for admission. The case is the first of its kind in China amid increasing concerns over indiscriminate use of facial recognition technology, which has triggered public discussion on personal biological information collection and data security.
In 2017, Google's smart speaker was found to have a major flaw: it secretly recorded conversations even when the wake phrase "OK Google" was not used. Before that, Amazon's smart speaker had also been found to record quietly without user interaction and send the audio back to Amazon for analysis. These issues drew attention to privacy concerns over "always-on" devices that listen for wake words.
In February 2020, the US facial-recognition startup Clearview AI, which contracts with law enforcement, disclosed to its customers that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted.
In August 2019, researchers found loopholes in the security tools provided by Korean company Suprema. Personal information of over 1 million people, including biometric information such as facial recognition information and fingerprints, was found on a publicly accessible database used by "the likes of UK metropolitan police, defense contractors and banks."
In February 2019, SenseNets, a facial recognition and security software company in Shenzhen, was identified by security experts as having suffered a serious data leak from an unprotected database, exposing over 2.5 million records of citizens with sensitive personal information such as ID numbers, photographs, addresses, and their locations during the past 24 hours.
In September 2019, China Pharmaceutical University was reported to have brought in facial recognition software for student attendance tracking and behaviour monitoring in class. Meanwhile, a photo from an industry event went viral online, in which a demo product from a major facial recognition company illustrated how it could monitor and analyze students' behaviour in class, including how often they raised their hands or leaned over the table. The two incidents quickly raised ethical concerns in China about current facial recognition applications in classrooms, and the Ministry of Education soon responded by moving to curb and regulate the use of facial recognition in schools.
In March 2018, the Facebook–Cambridge Analytica data scandal was exposed: a Cambridge academic had developed a psychological profiling app in 2013 and illegally obtained 87 million users' personal data through the Facebook interface. The data ended up being used by Cambridge Analytica, which was hired by Trump's campaign team, to build models of individual voters and target specific groups of users on Facebook during the 2016 US election, all without the users' permission.
About 100 drones lost control and crashed into a building during a show in Southwest China's Chongqing Municipality on Monday night. A person familiar with the matter later disclosed that a failure in the main control system caused the drones to malfunction. Although there were no injuries, the incident resulted in heavy economic losses for the show's designers.
Predictive tools developed by the electronic health record giant Epic Systems are meant to help providers deliver better patient care. However, a STAT investigation revealed that several of the company's AI algorithms deliver inaccurate information to hospitals about seriously ill patients. Research shows that the sepsis model failed to identify 67 percent of patients who had sepsis, while 88 percent of the patients it flagged with sepsis alerts did not have sepsis.
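To make those figures concrete, the following sketch (an illustration only; the 100,000-patient population and 5% sepsis prevalence are assumptions, not numbers from the STAT investigation) converts a 67% miss rate and an 88% false-alert rate into sensitivity and positive predictive value:

```python
# Back-of-the-envelope check of the reported Epic sepsis figures.
# Assumed, not from the source: 100,000 patients with 5% sepsis prevalence.
population = 100_000
prevalence = 0.05
sepsis_patients = population * prevalence

miss_rate = 0.67          # reported: 67% of sepsis patients were never flagged
false_alert_rate = 0.88   # reported: 88% of alerted patients did not have sepsis

true_positives = sepsis_patients * (1 - miss_rate)       # sepsis patients who got an alert
total_alerts = true_positives / (1 - false_alert_rate)   # alerts needed for 88% of them to be false
false_positives = total_alerts - true_positives

sensitivity = true_positives / sepsis_patients            # ≈ 0.33
ppv = true_positives / total_alerts                       # ≈ 0.12

print(f"sensitivity ≈ {sensitivity:.2f}, PPV ≈ {ppv:.2f}, "
      f"≈ {false_positives / true_positives:.1f} false alerts per true alert")
```

Under these assumed numbers, the model misses two out of three sepsis cases while generating roughly seven false alerts for every true one.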
An analysis released Monday by the MacArthur Justice Center at Northwestern University's School of Law concludes that ShotSpotter is too unreliable for routine use. Officers responded to 46,743 ShotSpotter alerts between July 2019 and April 14, 2021. Only 5,114 of the alerts, about 11 percent, resulted in officers filing a report "likely involving a gun," according to the study's analysis of records obtained from the city's Office of Emergency Management and Communications.
With Covid-19 stabilizing, the real estate market in the United States heated up rapidly: year-over-year price increases quickly soared from 5% to more than 10%, peaking at 19.8% in August 2021. Zillow's Zestimate model did not respond well to this shift, and the fluctuating house prices threw its estimates off track. Many of Zillow's transactions ended up upside down: homes were bought at high prices but were worth less once refurbished. In Phoenix, 93% of Zillow's refurbished houses were listed below the company's purchase price. The mistake not only cost Zillow money but also left it holding excess inventory; the combined loss for the third and fourth quarters was expected to exceed US$550 million, and the company planned to lay off 2,000 employees.
In October 2019, the self-serve package locker company Hive Box made headlines when its pickup machines were found to have a flaw in the facial recognition used to fetch parcels: some primary schoolers successfully opened lockers using only printed photos of their parents. Hive Box later announced plans to suspend the feature in response to public worries about the safety of facial scanning for pickup and payment.
The Henn na Hotel in Japan opened in 2015 with a staff made up entirely of robots, including the front desk, cleaners, porters, and housekeepers. However, the hotel has laid off half of its 243 robots after they created more problems than they solved, as first reported by The Wall Street Journal. In the end, much of the work had to be handed back to humans anyway, especially when it came to answering more complex questions. It seems we're still some way off from a completely automated hotel.
Uber used to test its self-driving vehicles in Arizona, and the company had been involved in over three dozen crashes prior to the one that killed 49-year-old Elaine Herzberg in March 2018. A later investigation found that "Uber's vehicle detected Herzberg 5.6 seconds before impact, but it failed to implement braking because it kept misclassifying her."
The Ningbo Transportation Department in China deployed smart cameras with facial recognition at intersections to detect and identify jaywalkers, some of whose names and faces are posted on public screens. The system, however, mistakenly "identified" a photo of Dong Mingzhu in an advertisement on the side of a bus as a real person running a red light. The error quickly spread across major social media platforms in China; local police admitted the mistake and upgraded the system to prevent further errors.
A robot "Fabio" is set up in a supermarket in Edinburgh, UK to serve customers. The robot can point out the location of hundreds of commodities through a "personal customization" program, but was rejected for failing to provide effective advice. Fabio failed to help customers, telling them beer could be found “in the alcohol section,” rather than directing customers to the location of the beer. He was soon demoted to offer food samples to customers, but failed to compete with his fellow human employees.
The Los Angeles Times reported on a 6.8 earthquake that struck Santa Barbara at 4:51pm, which might be surprising to the people of Santa Barbara who didn’t feel anything. The earthquake actually happened in 1925. The “reporter” who wrote the news article about the 6.8 quake was actually a robot. The newspaper’s algorithm, called Quakebot, scrapes data from the US Geological Survey’s website. A USGS staffer at Caltech mistakenly sent out the alert when updating historical earthquake data to make it more precise.
In February 2021, the Nantong Public Security Bureau in Jiangsu, China, uncovered a new type of cybercrime that used "face-changing" software to commit fraud. The criminal gang used a variety of mobile phone software to forge faces, passed WeChat's facial recognition check for lifting account restrictions, and "resurrected" several WeChat accounts that had been barred from logging in due to rule violations, which helped fraud gangs use those accounts to commit fraud.
The latest research shared by Tencent's Suzaku Lab shows that combining VoIP phone hijacking with AI voice simulation poses huge potential risks. Unlike earlier scripted telecommunications fraud, this technique enables full-link forgery, from the phone number to the tone of voice: attackers can exploit vulnerabilities to hijack VoIP phones, place spoofed calls, and generate the voices of specific people using deepfake voice-conversion technology to commit fraud.
Facebook AI has released TextStyleBrush, an AI research project that copies the style of text in a photograph, based on just a single word. This means that the user can edit and replace text in imagery, and the tool can replicate both handwritten and typographic compositions and bring them into real-world scenes. Researchers hope to open the dialogue around detecting misuse of this sort of technology, “such as deepfake text attacks – a critical, emerging challenge in the AI field.”
On June 7, 2021, a student in Wuhan, Central China's Hubei Province, was disqualified for using a mobile phone to search for answers during China's national college entrance exam, or gaokao. The student cheated by photographing part of the test paper and uploading the pictures to an online education app, where an AI could use the photos to search for matching answers in its database.
CCTV News demonstrated a technique that uses sample pictures to generate dynamic fake videos in real time: the synthesized video can perform movements such as opening the mouth and shaking the head, which can deceive facial recognition systems.
The Korea Baduk Association took punitive measures against Kim Eun-ji, a 2-dan professional Go player, after she admitted to having been assisted by an AI during a cyberORO Go competition held on Sept. 29. Her opponent had raised an allegation that she may have relied on an AI during the game. Kim had won against Lee Yeong-ku, a 9-dan professional and a member of the national Go team, a result that shocked many because it defied expectations.
Researchers have discovered a “deepfake ecosystem” on the messaging app Telegram centered around bots that generate fake nudes on request. Users interacting with these bots say they’re mainly creating nudes of women they know using images taken from social media, which they then share and trade with one another in various Telegram channels.
According to some media reports, "criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) from a UK company in March 2019. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI."
Following the use of deepfake face-swapping apps for pornography, an app called DeepNude stirred controversy in 2019. Users only needed to submit a picture of a woman, and with the help of AI the app would digitally "undress" the person in the photo automatically. Due to the huge negative impact, the developer soon shut down the application and its website, and some code-hosting communities have taken steps to prevent such programs from spreading further on the Internet.
IBM Research developed DeepLocker in 2018 "to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware." "This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition."
Researchers from UCAS recently presented a new method to covertly and evasively deliver malware through a neural network model. Experiments show that 36.9MB of malware can be embedded in a 178MB AlexNet model with less than 1% accuracy loss and without raising suspicion from the anti-virus engines on VirusTotal, which verifies the feasibility of the method. The research suggests that, as artificial intelligence becomes widely deployed, using neural networks as an attack channel is an emerging trend.
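The core idea is steganographic: model weights tolerate tiny perturbations, so arbitrary bytes can be hidden in the low-order bits of float32 parameters with negligible accuracy impact. The sketch below is a simplified, generic illustration of that idea, not the UCAS authors' actual embedding scheme; the one-byte-per-weight layout and function names are assumptions.

```python
# Generic sketch: hide bytes in the least-significant mantissa bits of float32 weights.
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Store one payload byte in the lowest 8 mantissa bits of each float32 weight."""
    flat = weights.astype(np.float32).ravel()          # contiguous float32 copy
    if len(payload) > flat.size:
        raise ValueError("payload larger than carrier model")
    bits = flat.view(np.uint32)
    for i, b in enumerate(payload):
        bits[i] = (bits[i] & 0xFFFFFF00) | b           # overwrite the lowest 8 bits
    return bits.view(np.float32).reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int) -> bytes:
    bits = weights.astype(np.float32).ravel().view(np.uint32)
    return bytes(int(x) & 0xFF for x in bits[:length])

# The relative change to each weight is at most about 3e-5, which is why
# model accuracy is barely affected.
w = np.random.randn(1000).astype(np.float32)
secret = b"demo payload"
w2 = embed_bytes(w, secret)
assert extract_bytes(w2, len(secret)) == secret
```

In the paper's setting the hidden bytes are a malware binary and the carrier is a published model, which is why conventional scanning of the model file raises no alarms.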
Security researchers Ralf-Philipp Weinmann of Kunnamon, Inc. and Benedikt Schmotzle of Comsecuris GmbH have found remote zero-click security vulnerabilities in an open-source software component (ConnMan) used in Tesla automobiles that allowed them to compromise parked cars and control their infotainment systems over WiFi. It would be possible for an attacker to unlock the doors and trunk, change seat positions, both steering and acceleration modes — in short, pretty much what a driver pressing various buttons on the console can do.
A research team from Tsinghua University proposed a physical attack on infrared person-detection systems using small light bulbs. In their demonstration, a person holding a board fitted with the small bulbs evaded the detector, while a person holding a blank board or carrying nothing was detected.
GitHub and OpenAI have jointly launched an AI tool called GitHub Copilot. Copilot automatically completes code based on the surrounding context, including docstrings, comments, function names, and code; given a few hints, it can complete an entire function. Programmers have found that Copilot is far from perfect: some of its output suffers from problems such as privacy leakage and security risks. In one study, NYU researchers produced 89 different scenarios in which Copilot had to finish incomplete code; across these scenarios Copilot generated 1,692 programs, of which approximately 40% had security vulnerabilities.
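The vulnerabilities in that study were largely classic coding weaknesses surfacing in generated code. The snippet below is a hypothetical illustration of one such pattern (SQL built by string interpolation) alongside the safer parameterized form; it is not an actual Copilot completion.

```python
# Hypothetical example of the kind of weakness reported in generated code.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is interpolated into the SQL text, so an input
    # like "x' OR '1'='1" changes the query (SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: the value is passed as a bound parameter, never as SQL text.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```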
A 2020 study by the security software company McAfee fooled a simulated passport face recognition system with machine-generated photos. One researcher, Jesse, used a system he built to generate a fake image of his colleague Steve: a passport-style photo that looked like Steve but matched Jesse's live video. If such a photo were submitted to the government by Steve and no human inspector were involved, it could allow Jesse to pass airport face verification as passenger "Steve" and board a plane.
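Face verification systems of this kind typically compare embedding vectors against a similarity threshold, which is why a single synthetic "morph" photo that sits between two identities can match both. The sketch below illustrates that decision rule only; the 0.6 threshold and the toy embeddings are assumptions, not details of McAfee's experiment.

```python
# Toy illustration of threshold-based face verification with a morphed probe.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Accept when the probe embedding is close enough to the enrolled one."""
    return cosine_similarity(probe, enrolled) >= threshold

steve_passport = np.array([0.9, 0.1, 0.3])   # made-up embedding for "Steve"
jesse_live = np.array([0.2, 0.8, 0.4])       # made-up embedding for "Jesse"
morph = (steve_passport + jesse_live) / 2    # a crafted photo between the two

# The morph verifies against Steve's enrolled template, and Jesse's live video
# verifies against the morph, so either side of the check can be satisfied.
print(verify(morph, steve_passport), verify(jesse_live, morph))   # True True
```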
In a 2020 study, researchers discovered a new kind of smartphone attack: an app can use the phone's built-in accelerometer to eavesdrop on the loudspeaker, recognizing the speech it emits and reconstructing the corresponding audio signals. Such an attack is not only covert but also nominally "lawful," since users can imperceptibly leak private information while the attackers are hard to find guilty under existing rules.
In March 2020, researchers from New York University developed a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, showing that deep learning models for arrhythmia detection from single-lead ECGs are vulnerable to this type of attack and can misdiagnose with high confidence. "The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist."
A 2019 study from Harvard Medical School demonstrated the feasibility of several forms of adversarial attack on medical machine learning. By adding minor noise to a medical image, applying a rotation, or substituting part of the text description of a disease, an attacker can lead the system to confidently arrive at manifestly wrong conclusions.
In August 2019, white-hat researchers proposed a novel, easily reproducible technique called "AdvHat": a rectangular paper sticker, produced on a common color printer and placed on a hat, fools the state-of-the-art public Face ID system ArcFace in real-world environments.
In November 2019, research conducted by Waseda University and other institutions in Japan used a smartphone and an acoustic generator to convert attack commands into acoustic signals, allowing a smart speaker to be attacked from a distance without the user's knowledge. Before that, another research team in Japan had hacked a smart speaker using a long-range laser: by hitting the speaker's microphone with a laser beam modulated with instructions, they successfully made the smart speaker open a garage door.
Research in 2018 showed that GAN-generated deepfake videos are challenging for facial recognition systems, and the challenge will only grow as face-swapping technology develops further.
In 2017, a group of researchers showed that it is possible to trick visual classification algorithms with slight alterations in the physical world: "A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time." If ignored, this kind of vulnerability may lead to serious consequences in some AI applications.
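The common weakness these attacks exploit is that a small, carefully chosen perturbation can push an input across a model's decision boundary. Below is a minimal FGSM-style sketch of that idea in PyTorch; the toy model, input, and epsilon are placeholders, and this is a generic digital example rather than the physical sticker attack described above.

```python
# Generic FGSM-style adversarial perturbation (illustrative placeholder model and data).
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Nudge x in the direction that most increases the loss for its true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # one small step along the gradient sign
    return x_adv.clamp(0.0, 1.0).detach()

# Placeholder classifier: the perturbation is visually negligible (bounded by epsilon)
# but can flip the predicted class of a vulnerable model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
x_adv = fgsm_perturb(model, x, label)
print((x_adv - x).abs().max())               # <= epsilon
```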
Researchers from cybersecurity company Bkav in Vietnam created their mask by 3D printing a mould and attaching some 2D images of the enrolled user's face. They then added "some special processing on the cheeks and around the face, where there are large skin areas, to fool the AI of Face ID." The mask is said to cost less than $150 to make.
The National Highway Traffic Safety Administration (NHTSA) has opened 23 investigations into crashes of Tesla vehicles. The Autopilot feature was operating in at least three Tesla vehicles involved in fatal U.S. crashes since 2016.
On December 25, 2020, a shopping guide robot in the Zhongfang Marlboro Mall in Fuzhou, which is known for its "smart business district," fell down an escalator and knocked over shoppers. The person in charge of the mall stated that on-site monitoring showed the robot was not being operated by anyone; it moved to the escalator on its own and caused the accident. The robot has since been taken out of service.
A robot named "Fatty" and designed for household use went out of control at the China Hi-Tech Fair 2016 in Shenzhen, smashing a glass window and injuring a visitor. The event organizer said human error was responsible for the mishap. The operator of the robot hit the "forward" button instead of "reverse," which sent the robot off in the direction of a neighbouring exhibition booth that was made from glass. The robot rammed into the booth and shattered the glass, the splinters from which injured the ankles of a visitor at the exhibition.
A security robot at the Stanford Shopping Center in Palo Alto hit and ran over a small boy, according to his parents. Knightscope Inc. has offered a public apology for the incident and has since recalled the robots from the Palo Alto mall.