Norman is an algorithm trained to understand pictures but, like its namesake, Hitchcock’s Norman Bates, it does not have an optimistic view of the world.
When a “normal” algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: “A group of birds sitting on top of a tree branch.”
Norman sees a man being electrocuted.
And where “normal” AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.
The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from “the dark corners of the net” would do to its world view.
The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.
Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.
These abstract images are traditionally used by psychologists to help assess the state of a patient’s mind, in particular whether they perceive the world in a negative or positive light.
Norman’s view was unremittingly bleak – it saw dead bodies, blood and destruction in every image.
Alongside Norman, another AI was trained on more normal images of cats, birds and people.
It saw far more cheerful images in the same abstract blots.
The fact that Norman’s responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT’s Media Lab which developed Norman.
“Data matters more than the algorithm.
“It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
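Prof Rahwan’s point can be illustrated with a toy sketch. This is not MIT’s actual Norman code, and the two corpora below are invented, but it shows the mechanism: the same learning rule, fed different training data, ends up describing the world very differently.

```python
# Illustrative sketch only (not MIT's Norman): identical learning code,
# different training data, different "world view".
from collections import Counter

def train(captions):
    """'Learn' a vocabulary simply by counting words in training captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, n=3):
    """Describe any input using the model's most frequent words --
    all it can say is what its data taught it."""
    return " ".join(word for word, _ in model.most_common(n))

# Hypothetical corpora standing in for "normal" versus dark training data.
normal = train(["birds on a branch", "birds in a tree", "a small tree"])
norman = train(["a man is shot", "a man is shot dead", "shot in the dark"])

print(describe(normal))  # vocabulary dominated by birds and trees
print(describe(norman))  # vocabulary dominated by violence
```

The algorithm is the same in both cases; only the data differs, which is exactly the imbalance Prof Rahwan describes.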
Artificial intelligence is all around us these days – Google recently showed off AI making a phone call with a voice virtually indistinguishable from a human one, while fellow Alphabet firm DeepMind has made algorithms that can teach themselves to play complex games.
And AI is already being deployed across a wide variety of industries, including personal digital assistants, email filtering, search, fraud prevention, voice and facial recognition, and content classification.
It can generate news, create new levels in video games, act as a customer service agent, analyse financial and medical reports and offer insights into how data centres can save energy.
But if the experiment with Norman proves anything it is that AI trained on bad data can itself turn bad.
Norman is biased towards death and destruction because that is all it knows and AI in real-life situations can be equally biased if it is trained on flawed data.
In May last year, a report claimed that an AI-powered computer program used by a US court for risk assessment was biased against black prisoners.
The program flagged that black people were twice as likely as white people to reoffend, as a result of the flawed information that it was learning from.
Predictive policing algorithms used in the US were also spotted as being similarly biased, as a result of the historical crime data on which they were trained.
Sometimes the data that AI “learns” from comes from humans intent on mischief-making. When Microsoft’s chatbot Tay was released on Twitter in 2016, the bot quickly proved a hit with racists and trolls who taught it to defend white supremacists, call for genocide and express a fondness for Hitler.
Norman, it seems, is not alone when it comes to easily suggestible AI.
And AI hasn’t stopped at racism.
One study showed that software trained on Google News became sexist as a result of the data it was learning from. When asked to complete the statement, “Man is to computer programmer as woman is to X”, the software replied “homemaker”.
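That “man is to computer programmer as woman is to X” task comes from word-embedding arithmetic, in which words are represented as vectors and analogies are solved by adding and subtracting them. As a hedged illustration (the tiny hand-made vectors below are invented, not the study’s Google News embeddings), the analogy reduces to vector arithmetic followed by a nearest-neighbour search:

```python
# Illustrative sketch of word-embedding analogy arithmetic.
# The vectors are invented toy values, not real trained embeddings.
import numpy as np

vecs = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([0.0, 1.0, 0.2]),
    "programmer": np.array([1.0, 0.1, 0.9]),
    "homemaker":  np.array([0.1, 1.0, 0.9]),
    "engineer":   np.array([0.9, 0.2, 0.8]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    target = vecs[b] - vecs[a] + vecs[c]

    def cos(u, v):
        # Cosine similarity between two vectors.
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Return the nearest remaining word, excluding the three inputs.
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "programmer", "woman"))
```

If the training corpus associates women more strongly with domestic words than with technical ones, that regularity is baked into the vectors, and the arithmetic faithfully reproduces it.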
Dr Joanna Bryson, from the University of Bath’s department of computer science, said that the issue of sexist AI could be down to the fact that a lot of machines are programmed by “white, single guys from California” and can be addressed, at least partially, by diversifying the workforce.
She told the BBC it should come as no surprise that machines are picking up the opinions of the people who are training them.
“When we train machines by choosing our culture, we necessarily transfer our own biases,” she said.
“There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities.”
What she worries about is the idea that some programmers would deliberately choose to hard-bake badness or bias into machines.
To stop this, the process of creating AI needs more oversight and greater transparency, she thinks.
Prof Rahwan said his experiment with Norman proved that “engineers have to find a way of balancing data in some way”, but he acknowledged that the ever-expanding and important world of machine learning cannot be left to the programmers alone.
“There is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour,” he said.
This new era of “AI psychology” would take the form of regular audits of the systems being developed, rather like those that exist in the banking world already, he said.
Microsoft’s chief envisioning officer Dave Coplin thinks Norman is a great way to start an important conversation with the public and businesses who are coming to rely on AI more and more.
It must start, he said, with “a basic understanding of how these things work”.
“We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right,” he said.
“When I see an answer from an algorithm, I need to know who made that algorithm,” he added.
“For example, if I use a tea-making algorithm made in North America then I know I am going to get a splash of milk in some lukewarm water.”
From bad tea to dark thoughts about pictures, AI still has a lot to learn but Mr Coplin remains hopeful that, as algorithms become embedded in everything we do, humans will get better at spotting and eliminating bias in the data that feeds them.
Visa says its service is “close to normal” again following a system failure which left customers across Europe unable to make some purchases.
The company apologised and said it had no reason to believe the hardware failure was down to “any unauthorised access or malicious event”.
Its statement came five hours after it had initially acknowledged the problem.
Shoppers had reported being stuck in queues as Visa transactions were unable to be processed.
Payment processing through Visa’s systems accounts for £1 in £3 of all UK spending, the company said.
Jay Curtis, from Swansea, had two cards declined in B&Q when he tried to pay for £240 worth of goods.
“My card just wouldn’t go through,” the 32-year-old told the BBC.
“I didn’t have cash on me so I had to drive all the way home.”
Labour MP Angela Rayner seems to be among those affected, tweeting that she had to leave her local petrol station without paying.
Elle Gibbs-Murray, from Bridgend, said she was stuck in traffic on the Severn Bridge for 45 minutes as drivers were unable to pay the toll by card.
Adam, from Manchester, is on a canal boat holiday with his girlfriend, Rach, and he was unable to use his card.
The 26-year-old said: “We have spent all day boating to moor up at a riverside pub in Kidlington for a birthday meal only to find the visa payments are not working. Having only £20 between us we have had to opt for a birthday drink instead.
“[There is] no cash point for miles around and no car as we are on the canal boat.”
In Berlin’s Alexanderplatz, customers at Primark complained of having to queue for 20 minutes to pay and staff there could not explain the reasons why transactions were failing.
Deborah Elder, from Glasgow, was unable to pay her restaurant bill while she was waiting at Frankfurt airport to fly back to Toulouse.
She said: “I was so embarrassed. I gave the waiter the 14 euros I had left.
“I’m worried I won’t be able to get home when I land in Toulouse as I have no cash for a taxi.”
Supermarket Tesco said chip and pin payments were not affected, but contactless payments were.
Sainsbury’s also said it had experienced problems.
We are aware some customers are experiencing Visa debit card issues. This is impacting multiple banks across Europe. We will update when we know more. Cash withdrawals can be made at any BOI ATM.
— Bank of Ireland (@talktoBOI) June 1, 2018
Consumer advocacy group Which? advised people to keep evidence of extra expenses incurred in order to claim them back in the future.
“Visa and the banks need to ensure no-one is left out of pocket due to this outage,” said Alex Neill, Which? managing director of home products and services.
A Visa spokesman said the system failure had “impacted customers across Europe” and the company apologised for falling “well short” of its reliability goal.
Indiegogo has said that it is willing to extend the deadline it gave to a project attempting to make a handheld version of a classic British computer.
But the crowdfunding site says that the team behind the ZX Spectrum Vega+ has yet to meet its conditions.
In February, Indiegogo threatened to appoint debt collectors if the campaign had not fulfilled its commitments by the end of May.
The project’s chief told the BBC he was “still determined to deliver”.
Dr David Levy added that he believed many backers “still are fully supportive of our finishing the project”.
The campaign originally pledged to send out the console in the summer of 2016.
The company he chairs, Retro Computers Ltd (RCL), has issued an update to backers saying it now intends to deliver the first consoles by 15 June.
He also told the BBC that “Indiegogo has extended the date” to mid-June.
However, the US firm explained that the situation was more complicated.
“We’ve taken several steps to protect our community, including demanding refunds for any backer that has requested one and banning the campaign owners from launching any other project on our platform,” explained a spokesman.
“We have also begun the process of sending the campaign owners to collections, which will commence unless the campaign delivers on the promises it made.
“More specifically, yesterday, we sent very clear requirements, which have not yet been agreed to by the campaign owners, despite them posting a campaign update earlier today.”
Indiegogo set out specific conditions that RCL must agree to.
Indiegogo also disclosed that RCL had suggested it now planned to deliver the handheld consoles without 1,000 games pre-installed as originally promised.
“If these requirements are not agreed to… we will continue with the process of sending the project to collections, which has already begun, and we will immediately notify backers, as previously discussed,” the Indiegogo spokesman added.
Sky has been asked for comment.
The Vega+ campaign raised a total of £512,790 from more than 4,700 people on Indiegogo before the US firm blocked it from accepting more funds in March 2017.
This was a highly unusual step for the fundraising service, which normally allows projects to decide when they want to stop accepting money.
According to RCL’s most recently filed accounts, it had £433,008 of assets at the end of March 2017.
Should the company be forced to refund the cash, it would mark one of the highest-profile crowdfunding failures to date.
Its use of the iconic Sinclair brand has meant the project has been widely followed by both the gaming and mainstream press.
However, debt collection agencies lack the legal powers to force an immediate repayment.
Chinese retail giant Alibaba has unveiled a new automated vehicle which it says is easy to mass-produce and could serve a number of functions.
These could include delivery courier or automated coffee vendor, it said.
The Cainiao G Plus can travel at up to nine miles per hour, reports The Verge news site.
It was unveiled at a conference where Alibaba founder Jack Ma announced a 100bn yuan (£11.6bn; $15.5bn) investment in smart logistics.
This includes devices such as warehouse robots as well as delivery aids.
AI expert and author Calum Chace described the G Plus as resembling the “ugly big brother” of a delivery bot developed by UK firm Starship Technologies.
“Starship has been working on this project for several years, so the Alibaba project looks to be behind as well as ugly,” he said.
“But that won’t matter. Anything to do with artificial intelligence is a high priority for China, which has set itself the target of overtaking the US as world leader in AI by 2030.”
China benefits from having free access to large amounts of data, essential for training algorithms, he added – potentially a drawback in Europe since the introduction of GDPR legislation, designed to protect privacy.
“Don’t bet against the Chinese pulling ahead in any AI-related competition, be it self-driving cars, facial recognition, or delivery bots,” Mr Chace said.
The G Plus vehicle is fitted with solid state Lidar – the laser sensors that form an important part of how autonomous vehicles perceive their surroundings.
Solid state Lidar is more compact, cheaper and easier to manufacture than the traditional system, which involves spinning multiple lasers in circles to help build up a 360-degree image of what surrounds the vehicle.
Various developers are creating their own versions, but essentially solid state Lidar uses fewer lasers and a tiny swinging mirror.
Last month BMW revealed that its autonomous vehicles will be fitted with solid state Lidar when they launch in 2021.
Google has addressed an unusual glitch in its Search and Assistant apps that made SMS text messages appear when specific search terms were entered.
Entering phrases including “the1975..com” and “izela viagens” into the apps made the phone display a person’s text messages.
The glitch was discovered by an Android user on Reddit, who described it as “the weirdest glitch I have come by”.
Google told the BBC it was a “language detection bug” that was being fixed.
Certain phrases were “erroneously interpreted as a request to view recent text messages”.
The company said the app could only display text messages if it had been given permission to do so, and it had implemented a fix that would be distributed within a few days.
On Thursday, Google was criticised after listing “Nazism” as one of the ideologies of the California Republican political movement.
Republican politicians fumed on social media that it was an example of conservative voices being suppressed or misrepresented by technology giants.
However, Google said the information was pulled in from a Wikipedia page that had been “vandalised”.
“We have systems in place that catch vandalism before it impacts search results, but occasionally errors get through, and that’s what happened here,” the company said.
US teenagers are ditching Facebook in favour of platforms such as YouTube, Instagram and Snapchat, a study says.
Only 51% use Facebook, which is a 20 percentage point drop since 2015, when the US-based Pew Research Center last surveyed teens’ social media habits.
Most of those aged 13 to 17 own or have access to a smartphone, with 45% online on a near-constant basis.
YouTube has stolen Facebook’s former dominance over teens, with 85% of them preferring the video-sharing platform.
Second and third top social media services among teens are now Instagram at 72% and Snapchat at 69%.
The numbers of teens who use Twitter (32%) and Tumblr (14%) are largely unchanged compared to the results found in 2015.
While Facebook may have lost its reign among the teenage demographic to Google-owned YouTube, it has owned the rising favourite Instagram, a photo and video-sharing networking service, since 2012.
The Pew study, which surveyed nearly 750 teens in one month earlier this year, found that the increase in smartphone ownership played a huge part in teen life. Today’s 95% is a 22-point increase from the 73% of teens three years ago.
It also found, consistent with previous studies, that while most teens used the same social media platforms as their peers, low-income teens were more likely to prefer Facebook than teens from a higher-income household.
The Pew survey could not find clear consensus among teens about the effects of social media on their lives.
Almost a third described the effect as mostly positive, while a quarter said it was mostly negative. The largest bloc, 45%, said that the effect was neither positive nor negative.
A chat app introduced by an Indian yoga guru and dubbed a “WhatsApp killer” has been removed from app stores amid a furore over security flaws.
Baba Ramdev’s Patanjali Products launched Kimbho on Thursday, calling it a “homegrown” rival to other chat apps.
But hours after its “launch”, experts pointed out the app was not secure and its user data could be easily accessed.
Patanjali told the BBC the app had no flaws and they had introduced it for a day to gauge initial public interest.
SK Tijarawala, a spokesperson for Patanjali Products, said Kimbho “will show the world that India can be the leader in global technologies”.
“We released the app just for a day to understand how public would react. The response has been phenomenal. We will properly launch the app in the near future and then I will be happy to answer any security related questions,” he said.
The app marked the entrepreneurial guru’s first venture into the tech industry.
His firm Patanjali is already a vast business empire that sells a wide range of products from shampoos and cereal to skin creams and instant noodles.
But a cyber security researcher, who tweets under the pseudonym Elliot Alderson, pointed out that this time around Patanjali may have rushed into launching the app.
Hi @KimbhoApp before trying to compete #WhatsApp, you can try to secure your app. It’s possible to choose a security code between 0001 and 9999 and send it to the number of your choice #kimbhoApp pic.twitter.com/YQqK8lfIeI
— Elliot Alderson (@fs0c131y) May 30, 2018
Technology writer Prasanto K Roy told the BBC that “Kimbho is clearly a quick and rough rework of a chat app called Bolo Messenger, with some, but not all, references to Bolo replaced by Kimbho”.
“Most worryingly, the Kimbho app is storing data in easy-readable text, and the process to verify user identity (a text message containing a password) can be easily gamed by a hacker – who can then read other users’ messages,” he said.
Alt News website, which is dedicated to busting fake news stories, reported that Patanjali “simply rebranded a messaging app called ‘Bolo Messenger’ that has been developed by a start-up in the US”.
“All the evidence on display clearly suggests that Baba Ramdev’s Patanjali has rebranded an already existing app and passed it off as the ‘Swadeshi’ [homegrown] Kimbho app,” the report said.
But Mr Tijarawala has rejected these claims.
“The app has been developed in-house at Patanjali and our engineers and developers worked on it. You will understand and see their efforts when we formally launch the app,” he said.
India, which is expected to have 500 million internet users by June, is already the biggest market for chat app WhatsApp.
Peer-to-peer “sharing economy” tech platforms have mushroomed in the last decade, but very few have rivalled Airbnb or Uber in size, largely because consumers have found them difficult to trust. So how are tech firms rising to this challenge?
When photographer Antonio Salvani, 36, was commissioned to do a wedding, he realised he didn’t have all the kit he needed to do the blushing bride justice.
Normally he would have gone to a camera rental shop to get the extra equipment, but deposits “can be £250-1,000”, he says.
A friend told him about Fat Lama, a start-up peer-to-peer (P2P) platform that enables people to rent out stuff they own.
“For £60 I was able to hire two cameras and lenses,” says Mr Salvani, a property management consultant in London’s Mayfair as well as a photographer.
“It would’ve been 40-50% more expensive through the rental shop.”
This experience persuaded him to list all his camera equipment worth about £20,000 on the site, “as I only use about 30% of it at any one time”.
Money came easily and he hasn’t looked back since.
He made £4,000 in April, he says, and has been able to buy his mother a £30,000 flat in Macedonia with the proceeds of this flourishing sideline business.
But isn’t he concerned about his equipment getting broken or stolen?
“I was very worried at first renting out my expensive kit to people I don’t know. I couldn’t sleep!” Mr Salvani admits.
“But you’re covered by insurance – I’ve had 700 rentals and no breakages so far.”
One guy did fail to return a smoke machine, but Fat Lama reimbursed him the £50 cost “within a week”, he says.
He meets all his customers face-to-face first and thinks this helps establish a personal relationship, making fraud less likely. One client left a piece of expensive kit on a train but reimbursed Mr Salvani £750 without needing to go through the insurance process.
Fat Lama chief executive Chaz Englander acknowledges that the sharing economy idea has been around for at least a decade, but trust and insurance have been stumbling blocks for many start-ups.
“Insurance companies were very sceptical at first because the market was still too small and the risks too high, so we concentrated on tech to make our risk profiling of customers as accurate as possible,” he says.
This involves analysing 150 data points on customers, explains Mr Englander, from their browsing behaviour – searching across lots of different product categories could be suspicious – to which type of phone they use – messaging from an iPhone one minute then from an Android the next might be another indication of dodgy behaviour.
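The kind of signal-based risk profiling described above can be sketched in a few lines. The feature names, thresholds and weights below are hypothetical illustrations, not Fat Lama’s actual model: the idea is simply that several weak behavioural signals are combined into one fraud-risk score.

```python
# Illustrative sketch of signal-based risk profiling.
# Feature names and weights are hypothetical, not Fat Lama's real model.
def risk_score(profile):
    """Combine weak behavioural signals into a single fraud-risk score."""
    score = 0.0
    # Browsing many unrelated product categories can be a fraud signal.
    if profile.get("categories_browsed", 0) > 5:
        score += 0.3
    # Switching device types mid-conversation is another weak signal.
    if profile.get("device_switches", 0) > 1:
        score += 0.3
    # A history of completed, trouble-free rentals lowers the score.
    score -= 0.1 * min(profile.get("completed_rentals", 0), 5)
    return max(score, 0.0)

print(risk_score({"categories_browsed": 8, "device_switches": 2}))  # 0.6
print(risk_score({"completed_rentals": 10}))  # 0.0
```

A real system would weigh far more signals (the article mentions 150 data points) and learn the weights from data rather than hard-coding them, but the principle of accumulating small pieces of evidence is the same.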
“Our risk modelling helped the insurers get on board – claims are capped at £25,000 per item,” he says. The cost is included in the 15% commission the firm charges both parties involved in the transaction.
Fat Lama, which has attracted 80,000 customers in London and is expanding in New York, is just one of thousands of firms around the world exploiting smartphone apps, cloud-based servers, GPS technology, eBay-style ratings, and flexible digital payment systems to facilitate what’s been dubbed the sharing economy – something of a misnomer.
For while there are organisations such as Freecycle dedicated to promoting the sharing of goods and services – without money changing hands – most P2P platforms take a commission.
The market has been growing exponentially. In the UK alone, sharing economy transactions could reach £140bn by 2025 says consultancy PwC, up from an estimated £7bn in 2015. Globally, this figure could top £2.6tn by some estimates.
The idea is that the stuff we own – houses, cars, camera equipment, our money – sits around doing nothing for most of the time when it could be earning us extra cash.
Younger generations in particular have embraced renting rather than buying, not only as a way to save money, but also as a way to live more sustainably in opposition to our throwaway culture.
In finance, companies like Funding Circle, GreenSky, and Lufax are matching lenders with borrowers in direct competition with those banks offering poor interest rates on savings.
In short-term property rental, Airbnb has blazed a trail, followed by the likes of HomeAway, HouseTrip and Tripping.com.
The taxi sector has been thoroughly shaken up by Uber and Lyft, while car rentals are being challenged by companies such as Turo, Getaround and easyCar Club.
But many start-ups have fallen by the wayside in this nascent market, unable to engender enough trust and confidence in such novel services, or reach scale quickly enough to survive the cut-throat competition.
“The biggest challenge for peer-to-peer brands is trust,” says Richard Laughton, chief executive of car sharing platform easyCar Club and chair of trade body Sharing Economy UK.
“People on both sides of a rental need to be confident that their assets will be looked after and their safety guaranteed.”
Technology is making the vetting of users easier, he argues, enabling techniques such as video verification and social media profile analysis to supplement the established rating systems.
And smart “internet of things” sensors could be “built seamlessly into the rental process to provide accurate feedback on how assets are being used,” he adds.
Kitemark schemes, such as Sharing Economy UK’s TrustSeal, also help to engender confidence, says Mr Laughton.
More Technology of Business
One P2P car sharing firm, Cube Intelligence, thinks the distributed ledger technology blockchain could eradicate the trust issue once and for all.
“Blockchain is set to reduce risk by facilitating a ‘trustless’ system in which personal data can be verified and payments can be transferred quickly and securely,” says Robert Cooke, the firm’s director of partnerships.
Smart locks will detect when a customer has arrived at the vehicle, meaning the owner doesn’t have to be present to hand over the keys, he says. He’s hoping the system could be extended to bicycles and motorbikes.
But for Turo, one of the world’s largest car-sharing platforms, the main challenges have not been insurance or trust – its insurer Liberty Mutual is an investor – but the threat of regulation lobbied for by car rental companies, says company spokesman Steve Webb.
Uber and Airbnb have had their fair share of regulatory issues, too, but if the main threats have switched from lack of trust to opposition from incumbent rivals, then the sharing economy must be doing something right.
The race to lead America’s self-driving car market moved up a gear with General Motors and Fiat Chrysler announcing major deals.
Japan’s SoftBank is putting $2.25bn (£1.7bn) into GM’s autonomous unit Cruise, one of the biggest single investments in self-driving technology.
And Google-owned Waymo is buying up to 62,000 Fiat Chrysler minivans for its autonomous fleet.
Meanwhile, Uber’s boss says it may work with Waymo on self-driving tech.
The SoftBank deal saw GM’s shares jump more than 10%, the biggest one-day gain since the company re-listed on Wall Street after its 2009 bankruptcy.
SoftBank will take a 19.6% stake in Cruise. The partnership values Cruise at $11.5bn, a triumph for GM which was criticised for over-paying when it bought the start-up two years ago for $1bn.
RBC Capital Markets analyst Joseph Spak said the deal confirmed that GM was one of the top contenders to deploy self-driving vehicles. “GM has a meaningful seat at the table,” he said.
GM chief executive Mary Barra said the company was still “on track” to begin deploying its Cruise vehicles in commercial ride-sharing fleets in 2019.
She said GM planned to launch its own ride hailing and delivery services business but could explore “other opportunities” with some of the companies that SoftBank has funded.
That is a reference to the money that SoftBank’s $100bn Vision Fund has invested in Uber, Didi, Ola and Grab.
Fiat Chrysler, America’s number three carmaker behind GM and Ford, also stepped up its self-driving efforts.
The company will begin delivering the first of its 62,000 Pacifica vans later this year.
Fiat Chrysler is also exploring the potential to put Waymo technology into a self-driving car it might add to its model line-up for consumers.
“Strategic partnerships, such as the one we have with Waymo, will help to drive innovative technology to the forefront,” Fiat chief executive Sergio Marchionne said.
That announcement came after the chief executive of Uber, Dara Khosrowshahi, said Waymo cars could be used by the ride-hailing firm.
It comes only months after the two companies were at legal loggerheads over a trade secrets dispute.
Uber suspended its own autonomous car testing in April after an accident that killed a woman pushing a bicycle in a street in Arizona.
Waymo’s chief executive, John Krafcik, has said the company’s own self-driving software is “robust” enough to avoid the sort of accident Uber suffered in Arizona.