Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
When AI misjudgment is not an accident
Anne says: Trick or treat?? It’s Halloween and this week I selected a disturbing article to keep in the spirit of things!
The issue of bias is widely recognised when discussing artificial intelligence (AI) and algorithms. How can we trust algorithms to provide us with authentic data? Mostly the discussions have focused on unconscious bias, where algorithms are unknowingly shaped by the beliefs and values of the people who build them. And then there’s cognitive bias, where we make flawed decisions based on various factors. For example, we may believe one person over another in a debate because their argument appears to be logical. In fact, it could be confirmation bias: their argument supports our existing opinion.
These types of biases, although of concern, can be managed by coding with a diverse team, using multiple data sources, monitoring results, and adjusting datasets or algorithms.
But now there’s intentional, deliberate bias. This is not unconscious; this is deliberate interference with algorithms to intentionally create a bias. Think sophisticated cyber attacks, fake news and competition between organisations. I think we’re familiar with the impact of deliberate interference such as the Russian manipulation of Facebook during the US elections. But how might this be used against companies?
Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.
There are more examples in the article, but start considering the bigger picture: deliberate bias deployed between hostile governments could have far-reaching impact.
The scariest part (and it’s not a Halloween trick or treat) is how simple the process could be. Algorithms could be fed biased data, or algorithms could be programmed to amplify existing biases. This is like malicious viruses or malware on steroids – what is being labelled as “poisoned algorithms”.
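To see just how simple, here’s a toy sketch of the “bait” scenario described above. It’s purely my own illustration (not from the article): a competitor trains a trivially simple nearest-centroid classifier, and the attacker releases data points that look like one class but carry the other class’s label.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Two classes of 1-D points: class 0 clustered near -2, class 1 near +2."""
    X = np.concatenate([rng.normal(-2, 1, n), rng.normal(2, 1, n)])
    y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
    return X, y

def train_centroids(X, y):
    # "Training" here is just the mean feature value of each class.
    return np.array([X[y == 0].mean(), X[y == 1].mean()])

def accuracy(centroids, X, y):
    # Predict whichever class centroid is nearest.
    preds = np.abs(X[:, None] - centroids[None, :]).argmin(axis=1)
    return (preds == y).mean()

X_train, y_train = make_data(500)
X_test, y_test = make_data(500)
clean_acc = accuracy(train_centroids(X_train, y_train), X_test, y_test)

# "Bait" release: 500 points sitting in class 1's territory (around +4)
# but labelled class 0. Ingesting them drags the class-0 centroid to the
# right, shifting the decision boundary and degrading the product.
X_poison = np.concatenate([X_train, rng.normal(4, 0.5, 500)])
y_poison = np.concatenate([y_train, np.zeros(500, int)])
poisoned_acc = accuracy(train_centroids(X_poison, y_poison), X_test, y_test)

print(round(clean_acc, 3), round(poisoned_acc, 3))  # accuracy drops once the bait is ingested
```

The attacker never touches the victim’s code; simply publishing mislabelled data is enough.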
What can we do to protect our algorithms? Currently, processes for managing unconscious bias include building workforce diversity, expanding access to diversified data, and building in algorithmic transparency. But is this enough?
The authors suggest that this will be a systemic challenge that will require constant review and further development to ensure we’re producing AI systems that benefit us – not exploit us!
Is there a market for AI art? This $600k painting suggests there is
Joel says: Just a few weeks ago I wrote about an AI program that was able to generate anime character artwork using AI systems. Now it seems similar technology is being used to generate traditional artworks too. And don’t think just because they’re generated by systems and algorithms that they’re cheap!
We know AI can learn to write like Shakespeare and compose pop songs, but now an original painting generated by AI has sold for more than 38 times its expected price. In a world-first event in New York last week, Portrait of Edmond de Belamy, a blurry picture of a man in a dark frock coat, was auctioned off by Christie’s for $610,000.
Software developer and self-described robot artist Jeremy Kraybill says there’s no doubt AI will shake up the art scene — the only question is how much.
“We’re going to see a lot more integration of these technologies in artistic expression,” he told Late Night Live.
“I think it’s an interesting time to be alive. I can’t wait to see what happens.”
But how is AI art made, you may ask?
This portrait was made in two stages. The first stage involved gathering 15,000 real portrait paintings from throughout history, which were then analysed by a machine. In the second stage, known as ‘the discriminator’, the machine was “trained to tell the difference” between the robot-generated images and the human paintings.
The goal of the second stage was to reach a point where the discriminator was unable to tell the difference between an AI-generated artwork and a human painting. This means the AI artwork could be passed off as a unique human painting.
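If you’re curious how those two stages fit together in code, the setup described is a generative adversarial network (GAN): a generator keeps proposing fakes while the discriminator learns to tell them from the real thing. Here’s a tiny sketch of my own (purely illustrative, not the program behind this portrait) where the “paintings” are just numbers clustered around 4.0:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

REAL_MEAN = 4.0        # stand-in for the 15,000 human paintings
theta = 0.0            # the generator's only parameter: G(z) = theta + z
w, b = 0.0, 0.0        # the discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, 0.5, batch)      # human "paintings"
    fake = theta + rng.normal(0.0, 0.5, batch)    # generated "paintings"

    # Stage two, 'the discriminator': learn to score real high, fake low.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake).mean()
    b += lr * ((1 - d_real) - d_fake).mean()

    # The generator: nudge theta so the fakes fool the updated discriminator.
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake).mean() * w

# By the end, the fakes sit on top of the real data and the discriminator
# can no longer tell the difference - the stopping point described above.
print(round(theta, 2))
```

The same tug-of-war, scaled up from one number to millions of pixels, is what produced the Belamy portrait.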
What’s interesting is that Kraybill sees AI as “yet another tool in the bag of the artist,” rather than a threat to creativity or creative artists.
“There was a huge debate around photography and art when it first came out. Did it put all painters out of jobs? No. And did it become part of the mainstream? Yes,” he said. He thinks AI will also create opportunities for a wider range of people to express themselves.
Read the full piece to find out more about the program that generated this art piece and to learn about the various ways AI is being used in the creative industry today.
Why dogs are great disease detectors
Whoopi says: Meet Freya, a Springer Spaniel. Freya’s pretty special: she’s been trained to sniff out people with malaria. And she and her colleagues have been 70% correct! That’s pretty impressive, when all she did was identify a pair of smelly socks!
In the UK there’s a team working with dogs like Freya, training them to identify malaria. It’s pretty straightforward: as dogs, we have an amazing sense of smell. We’re attracted to new and interesting odours – it’s called neophilia. Just tell us which smell you want us to find, and leave the rest to us!
Of course, using us to sniff out all sorts of things, from bombs to drugs to cancers, isn’t new. But with malaria detection, they’re hoping we could help at airports to identify people with the disease without blood tests and enable early intervention.
They reckon they’re going to be able to train robots (or bio-electronic noses) to do our work – but you know what… I’m sure most people would rather have a wet nose belonging to one of the canine team than some robot nose smelling their socks!
Bottled air anyone?
Helen says: Decades ago, when backpacking around Europe, I scoffed at the notion of buying bottled water, taking for granted our water quality back home. Needless to say, the market has proven strong and I have to admit to being a consumer. Could it be the same for clean air? The science isn’t in yet, and regulation is even further behind. Nevertheless, there seems to be a burgeoning market for packaged air. A couple of Canadians joked about the idea of retailing air and they now find themselves distributing Canadian mountain clean air globally.
Their market started in China, but with 95% of the world’s population living in areas where air quality is poor at best, the market is expanding, with bottles of clean air retailing for $32.00 a can or 20 cents a breath. Users take a breath now and then, because doing this full time would be cost prohibitive. This makes the possibility of any positive impact on health spurious at best, but with ‘pollution season’ just around the corner, growth seems set to continue.
Would you breathe bottled air?
Gartner’s 5 drivers for digital transformation
Jakkii says: Gartner has released its ContinuousNEXT approach for digital transformation, in which they identify five imperatives to enable and drive transformation.
- Digital twin
- Augmented intelligence
- Product management
The ‘digital twin’ concept piqued my interest. Gartner’s Helen Huntley explains:
“[Dynamic modelling with a digital twin organisation is] taking the roof off your work location and visually looking inside to see how all the dynamics are working in your organization and, via dynamic software modeling, putting that together in a virtual world”
Although it’s 2018 and it seems we’ve been talking about ‘digital transformation’ forever, it’s still a significant challenge for many organisations. This piece delves a little further into each of the five imperatives, so have a read through and let me know what you think. Do these five imperatives make sense to you with regard to how they might enable and drive digital transformation? I’d love to know why, or why not.
This week in social media
- Facebook is separating Workplace from the main Facebook site to appease business customers concerned about security
- Twitter is not the echo chamber we think it is
- From Silicon Valley elite to social media hate: The radicalization that led to Gab
- Twitter’s problem isn’t the like button
- Twitter now lets you report accounts that you suspect are bots
- Facebook is still growing at a slow but steady pace
- How Facebook failed to build a better Alexa (or Siri)
- Twitter tests home-screen button to switch tweet order
- Snapchat adds lenses to desktop cameras to boost awareness of lens tools
- Facebook launches local news feature in 10 Australian cities
- The ‘Stories’ product that Facebook copied from Snapchat is now Facebook’s future
Sydney Business Insights – The Future This Week Podcast
This week: casting the dead, boss forever, and things that vanish. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
Other stories we bring up: