Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves in the archive.

 

This Chatbot Helps Refugees Prepare for Asylum Interviews

Jakkii says: Yesterday I was at a symposium – part of the ever-excellent Swarm community management conference here in Australia – that discussed research into AI and automation, including chatbots. A recurring theme in those discussions – and indeed during the conference proper on Wednesday – was trust: trust in one another, in and within communities, and in and of data and machines.

It’s timely, then, that I happened upon this piece about a chatbot designed to help refugees. One of the aspects I found particularly fascinating was that the MBA students behind the bot were inspired by the research of a law professor at UC Berkeley, Katerina Linos. Linos found that one of the reasons refugees turn to smugglers instead of applying for asylum is that they do not trust the information they receive about asylum and about their rights. Through their own desk and user research, the development team identified a key information gap: how to navigate the asylum process. It is this gap that the app, MarHub, aims to fill.

What I particularly took from this piece was something of a confirmation of the many discussions I’d been fortunate to be part of this week, examining the role and power of community managers in developing and maintaining trust. Whilst this is somewhat tangential to the article itself, the underlying issues are the same – without learning how to navigate these issues of trust and working to build trust in the application and its information, the app would not be successful, and would not be able to help those it sets out to serve. This is certainly the case for communities as well, whether they are communities of interest or practice, or communities for customers, partners or employees of a business or brand.

And of course, it doesn’t hurt to hear a good news story about an app that is trying to help vulnerable people by empowering them with knowledge, either. 😉

Read: https://www.fastcompany.com/40456557/this-chatbot-helps-refugees-prepare-for-asylum-interviews

465,000 patients need software updates for their hackable pacemakers, FDA says

Nat says: We rarely think about the internet of things (IoT) in relation to our own bodies, but this is exactly what is happening in the bio-technology space. Of all the types of implantable devices we can have inside our bodies, the ones associated with our hearts, such as pacemakers, seem rather important.

However, the shared article reports that nearly half a million patients in the US require a software update for their pacemakers, as it is possible for hackers (at close range) to take control of a pacemaker, deplete its battery, or accelerate its pacing. The US Food and Drug Administration (FDA) released this statement:

The FDA reminds patients, patient caregivers, and health care providers that any medical device connected to a communications network (e.g. Wi-Fi, public or home Internet) may have cybersecurity vulnerabilities that could be exploited by unauthorized users.

Luckily, no one has been hacked yet, but with the rise of Artificial Intelligence (AI) and the “internet of healthcare”, the security of bio-tech has become a prevalent topic of conversation in relation to safety. We are now in an age in which the security problems of IT can mean the difference between life and death.

Read: https://motherboard.vice.com/en_us/article/nee5bw/465000-patients-need-software-updates-for-their-hackable-pacemakers-fda-says

Instagram’s Bold Plan to Block Hateful Comments Using AI

“…this is about making the internet better. I hope the technology and (machine) training we develop can help build a kinder, safer, more inclusive community online”. –Instagram’s Kevin Systrom

Emilio says: Could Instagram be better if it were a safe, rosy place?

Instagram CEO Kevin Systrom, in an interview with Wired, says they want to make the image-sharing social network not necessarily rosier, but a safer, less hateful place. With a more robust community moderation system, Instagram wants to ensure the platform of 700 million users is purged of the trolliest of trolls – or what he estimates to be the bottom 5% of nasty users. They are taking it a step further by not only filtering bad comments, but also elevating positive ones.

What’s revolutionary about this system is that it uses machine learning and AI which, Systrom claims, detect bad and mean comments with near-perfect accuracy. Integrating the DeepText technology from its parent company Facebook, supported by an army of real humans gathering feedback from the community, the system looks good on paper – but will it work? Will it further curtail freedom of speech and neutrality? Will it deter users from sharing what’s really on their minds?

Watch the interview here and let us know your thoughts in the comments.

Read: https://www.wired.com/video/2017/08/instagram-s-plan-to-deleted-hateful-comments-using-ai/

One billion drones in world by 2030, futurist says

Joel says: We had a discussion amongst ourselves internally this week about drones, after I raised my belief that they would become highly disruptive in the future, used across many industries and in applications we probably haven’t even thought of yet. It’s good to see we were onto something after reading this tech article from The New Daily.

“Drones will become the most disruptive technology in human history,” American futurist Thomas Frey says, predicting that by the year 2030, there will be one billion drones in the world doing things people cannot yet imagine.

There are limitless possibilities for cities of the future – it will just take people adding a few new dimensions to their thinking, as the applications are endless.

“In the future drones are going to have multiple capabilities, so let’s not think of them as little flying cameras. They can also roll on the ground, they can stick to the side of a building, float in the river, dive under water… they can climb a tree and attach themselves like a parasite to the side of a plane. A driverless car is a drone.”

The article then ventures into common concerns raised about the dangers of a drone-driven society, and how privacy laws are constantly playing catch-up with the drone industry. Well worth a read if you’re interested in what our future may one day look like.

Read: http://thenewdaily.com.au/life/tech/2017/08/31/one-billion-drones-world-2030-futurist-says/

