Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves in the archive.

Rude robots and the Theory of Mind

Anne says: More about robots this week, building on last week’s thought-provoking questions. This week’s articles demonstrate some of the current research and its impact. As you read them, keep in mind some of the concerns from last week (how we may emotionally engage – or not – with robots, and what we need to think about). I think both articles raise further concerns while exploring the exciting kind of research that helps us understand ourselves even as we develop new technologies.

Article 1: How rude humanoid robots can mess with your head

This is interesting research: what if your humanoid robot gave you negative responses when you asked a question? How would you react? Research being conducted by a group in France is investigating how a robot’s attitude may affect your ability to do a task. (Perhaps not so different from co-workers or managers?) It also extends to the influence of robots on children, and how they may be able to influence children’s decisions.

The key finding:

“…how the development of advanced social robots is far outpacing our understanding of how they’re going to make us feel. What these studies suggest is that humanoid robots can manipulate us in complex ways. And scientists are just barely beginning to understand those dynamics…”

As previously highlighted, we need to understand this impact further before we find our behaviours being encouraged or limited in ways that are not intended – particularly for children!

Article 2: How to make a robot use theory of mind

Meanwhile, other researchers are trying to create robots that can respond to our needs by anticipating what it is we need. The article’s opening example is useful for understanding this. You’re in a lift and you see your colleagues running along the corridor. You put your arm across the closing doors to hold the lift for them. You understood their behaviour – their rapid movement indicated they were trying to catch your lift – and you responded by holding it for them. Easy, everyday sorts of reactions… unless you’re trying to program a robot with what are called predictive social skills!

The research being conducted at the University of New England is attempting to develop:

“…understanding through “simulation theory of mind,” an approach to AI that lets robots internally simulate the anticipated needs and actions of people, things and other robots—and use the results (in conjunction with pre programmed instructions) to determine an appropriate response.”

This sounds almost unachievable to me, as it requires us to understand ourselves first. This is where research into Theory of Mind comes in, a term used to describe the:

“…ability to predict the actions of self and others by imagining ourselves in the position of something or someone else.”

This type of programming is not machine learning; it requires a simulation-based approach within an internal programmed model. The researchers have had some limited success with simple tasks, but given that theory of mind is not well understood even in humans, it makes you realise that some of the hype around robots taking over our jobs is still a long way off.
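To make the idea concrete, here’s a minimal sketch in Python of what a simulation-based approach could look like at its very simplest, using the lift example from above. To be clear, this is my own toy illustration, not the researchers’ actual system; the candidate goals, speeds and responses are all invented for the example.

```python
# A hypothetical sketch of simulation theory of mind, not the actual research
# system: the robot forward-simulates an observed person under a few candidate
# goals, keeps the goal whose simulated behaviour best matches the observation,
# and maps that goal to a preprogrammed response. All numbers are illustrative.

# Expected movement speed (m/s) the robot would simulate for each candidate goal.
CANDIDATE_GOALS = {
    "catch_the_lift": 2.5,  # running for the doors
    "walking_past": 1.2,    # just strolling by
}

# Preprogrammed responses, keyed by the inferred goal.
RESPONSES = {
    "catch_the_lift": "hold the door open",
    "walking_past": "let the door close",
}

def infer_goal(observed_speed: float) -> str:
    """Pick the goal whose simulated behaviour best explains what was observed."""
    return min(CANDIDATE_GOALS,
               key=lambda goal: abs(CANDIDATE_GOALS[goal] - observed_speed))

def choose_response(observed_speed: float) -> str:
    """Combine the simulated inference with the preprogrammed response rules."""
    return RESPONSES[infer_goal(observed_speed)]

if __name__ == "__main__":
    # A colleague running down the corridor at 2.4 m/s -> "hold the door open"
    print(choose_response(2.4))
```

Even this toy version hints at why the real thing is so hard: the robot needs a plausible internal model of every goal a person might have, and human goals rarely come in tidy lists of two.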

The article goes on to describe some of the researchers’ further intentions: what else they’re trying to develop, and its potential impact.

Moral of the story: While a robot may be programmable to be rude, cheeky or encouraging, don’t expect it to hold the lift door open for you anytime soon!

Read Article 1: https://www.wired.com/story/how-rude-humanoid-robots-can-mess-with-your-head

Read Article 2: https://www.scientificamerican.com/article/how-to-make-a-robot-use-theory-of-mind/

What is artificial general intelligence?

Nat says: I know what you’re thinking: you’ve barely gotten your head around the whole “artificial intelligence” movement, and now you have “artificial general intelligence” to grapple with. As the latest AI buzz to hit the news, it seems fitting to discuss artificial general intelligence (AGI) in relation to the Gartner Hype Cycle, as the cycle’s 2018 predictions were also released this week. The Hype Cycle is an annual graph, accompanied by a report, produced by the research firm Gartner, which explores the media “hype” surrounding new technologies, along with the progress other technologies have made in society over time.

As depicted in the graph, technologies typically move through five stages, which represent not only a technology’s maturity but also its real-world and business application. The symbols next to each technology’s name indicate when Gartner believes that technology will reach its plateau stage – that is, when adoption will actually occur and its benefits can be realised. AGI is appearing on the Hype Cycle for the first time, and Gartner estimates it will take more than ten years to move through the cycle’s stages. Some technologies have been known to fall off the graph and disappear into the abyss at the Trough of Disillusionment stage, so there’s no guarantee the plateau will ever be reached. But given that AGI is now part of the Hype Cycle, what is the shared article saying about it?

Alarmingly, and somewhat depressingly, a lot of rhetoric accompanies the explanation of AGI in the article. Described as a “super intelligent AI” capable of “meta” thinking and human levels of consciousness, the depiction falls victim to how we, unfortunately, have come to talk about our machines in modern times: as though they are God-like entities that can save us from ourselves. I would actually encourage you to read the article and question its underlying assumptions. We tend to forget that it is we as humans who make technology. We program the machines. Machines themselves are not conscious, they are not self-aware, they have no idea that they “exist”, and they have no capacity to “know” anything. Everything the article says just reaffirms why Gartner has its “hype” cycle: people create new machines and tell the world they will solve all the world’s problems, forgetting that our machines say more about us as humans than they do about anything else.

Read: https://www.zdnet.com/article/what-is-artificial-general-intelligence/

Persuasive technology needs to be kept in check

Helen says: Earlier this month I commented on an article about human evolution in the digital era. This week I came across another article relating to the use of psychology in technology design. “Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfil their basic human drives — to be social or obtain goals — better than real-world alternatives. Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.”

The increasingly addictive nature of apps, their reach, and the impact they are having on children are concerning, not only for parents but for society as a whole. The author of this article, Richard Freed, a child and adolescent psychologist, describes how persuasive technology is used to change behaviour, with profit being the key driver for its creators. Unprecedented levels of screen time, falling grades, and increasing levels of depression and self-harm are some of the possible consequences of this rising addiction to technology.

Psychologists have a code of ethics that requires their work to benefit their patients, not harm them. On the other hand, companies using persuasive technology are not bound by any such code of ethics. Regulation in this field needs to be considered to address the growing exploitation inherent in persuasive design.

Read: https://medium.com/@richardnfreed/the-tech-industrys-psychological-war-on-kids-c452870464ce

Scientists have found a new way to stimulate lucid dreams

Joel says: In case you’ve never heard of lucid dreams before, they are dreams in which the dreamer knows they are dreaming. The dreamer is able to take control of the dream’s characters, narrative and environment to create their own experience.

One issue is that lucid dreams are quite rare and difficult to induce. Researchers have spent decades trying to understand what causes them and testing various techniques for reliably creating the lucid dreaming experience.

Scientists at the University of Wisconsin-Madison and the Lucidity Institute in Hawaii have figured out a more consistent way to induce a lucid dreaming state, and it involves a drug normally used to treat Alzheimer’s disease.

The drug used for the study, galantamine, is regularly used for Alzheimer’s as well as other nervous system disorders such as muscular dystrophy.

With a placebo instead of galantamine, 14 percent of participants reported having lucid dreams. After a 4mg dose of the drug, that number rose to 27 percent.

Incredibly, after an 8mg dose of galantamine, 42 percent of participants reported having lucid dreams.

The findings of the study suggest that galantamine’s effectiveness might be “related to its effects on cholinergic receptor activity during REM sleep.”

For the study, lucid dreams were classified as dreams that “were associated with significantly higher levels of recall, cognitive clarity, control, positive emotion, sensory vividness and self-reflection on one’s thoughts and feelings compared to non-lucid dreams.”

I think it would be cool to experience a lucid dream once, although I’d prefer it not to be drug-induced, even though the study reported minimal side effects in the participants.

Read: https://www.cnet.com/au/news/scientists-have-found-a-new-way-to-stimulate-lucid-dreams/

Kids connect with robot reading partners

Jakkii says: I’m also on the robot train this week with a nice example of a use case for robots: applying social learning theory to see if the usefulness of social learning holds when the ‘social’ comes from something artificial.

Kids learn better with a friend. They’re more enthusiastic and understand more if they dig into a subject with a companion. But what if that companion is artificial?

Researchers at the University of Wisconsin-Madison wanted to explore this question. The team believes robots will soon be a common fixture in our homes, and wondered whether those robots could also serve as social learning companions for the children in those homes, improving educational outcomes. They developed a robot called Minnie, programmed it to be an interested listener, and designed a two-week reading program for children to engage with and follow.

They saw some positive and encouraging results:

The number of children who told researchers the robot has a personality or emotions increased more than fourfold over the two weeks readers spent with Minnie. The number reporting they were motivated to read also spiked — and surpassed a control group following a paper-based version of the reading program. And kids who read with Minnie said they felt like they understood and remembered more about the shared books.

This isn’t the only example of robots working with children in education; socially assistive robots aren’t really a new concept by now. For instance, ‘Tega’, developed at MIT, is “a socially assistive robot designed to serve as a one-on-one peer learner in or outside of the classroom.” Research on Tega shows both the possibility of personalised assistance through the robot’s learning and positive results for students’ learning, as well as their engagement with and connection to the robot.

Ultimately, the University of Wisconsin-Madison researchers hope Minnie – or some version of Minnie – will be a fixture in homes, capable of interacting and engaging with all members of the household.

They expect social learning functions could be part of a companion robot shared by a whole family, but acknowledge that this future is fraught with design concerns: how to craft an engaging personality for users of different ages, protect family members’ privacy, and maintain the hard-won trust of a child who will also see the robot having private interactions with parents.

What do you think – how would a robot companion offer value to your home and your family?

Read: https://www.sciencedaily.com/releases/2018/08/180822141035.htm

Sydney Business Insights – The Future This Week Podcast

This week: Googling productivity, placing weight on calories, and changing stories. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week:

Google wants to help you focus and your boss to spy on you

The global food crisis may come in 2027

Two years of Instagram stories has altered the way we act and play

Other stories we bring up:

Google’s patent application

Some scientific benefits of being bored

The daydreaming and task performance paper

The planning and the functionality of mind-wandering paper

Resource security megatrend and the food crisis 

The world could run out of food two decades earlier than thought

China’s plan to cut meat consumption by 50%

Less Meat, Less Heat with James Cameron & Arnold Schwarzenegger

WeWork eliminates most meat from its menus and employees can’t expense it either

Our discussion of going meat free at (we)work

This account shows everyone is living the same life on Instagram

@insta_repeat

Listen: http://sbi.sydney.edu.au/the-future-this-week-17-aug-18-productivity-food-crisis-and-changing-stories/

