Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves in the archive.

Google Assistant’s new interpreter mode can translate conversations — but it’s not magic

Anne says: A couple of years ago Google launched its Pixel Buds, earbuds that translated conversations directly as you heard them (they didn’t receive very positive reviews). This year, Google has gone a step further: Google Assistant will be able to translate 27 languages (for starters), working on devices such as the Google Home Hub, Google Home speakers, and third-party Google Assistant displays. It’s not fully available commercially as yet – they’re launching with a pilot in selected US hotels. If you’re in Las Vegas, Caesars Palace is one of them.

Here’s a basic explanation of how it works:

“…Google Assistant will show text across a smart display that translates your words as you speak. Afterward, it will open the microphone for the second person to be able to speak in their language and words will be translated across the screen at the same time. Google Assistant also plays back the words in your native tongue.”

That’s pretty clever!

Like most conference call systems, it won’t work if more than one person is talking at once. And it will obviously slow the natural flow of conversation – but if I were a professional translator, I’d be starting to diversify my career plans!

For people who live and work in multilingual environments, it could provide an enhanced translation experience. There are some immediate use cases I can think of, like hospitals – having been a patient in a non-English-speaking environment, with limited proficiency in medical terms, being able to ask Google would have been extremely useful! There are lots of workplace contexts too – for example, conference calls.

However, I hesitate to announce the death of learning a foreign language just yet. There are still many nuances that automated translation can’t provide: local idioms, contextual references and the ability to engage in natural conversation. And there’s also the benefit to neural flexibility that learning languages provides.

For the moment, this is an exciting technology development with lots of potential – just be a little sceptical of its capabilities at this stage. (wink)

Watch the short video (above) – you’ll see some errors that, while amusing, could cause some serious problems in certain contexts.

Read: https://www.theverge.com/2019/1/8/18170806/google-assistant-translate-languages-real-time-interpreter-ces-2019

Thinking Outside the Black Box

Jakkii says: Anne actually sent me this article to read, as it aligns with much of what I’ve talked about regarding Facebook, particularly over the past 12 months. In turn, I share it with you this week, as I believe it’s an important read. At an estimated 8 minutes, it’s not a super short read, but it’s a worthy one, and I encourage you to take time to read it, digest it, and reflect upon it.

Authored by Douglas Rushkoff, Professor of Media Theory and Digital Economics at the City University of New York, Queens College, the piece reflects upon Facebook’s failings before suggesting perhaps there’s something more we can learn from Facebook: “The real value we can derive from Facebook comes from interacting directly and purposefully with its dark innards: the algorithms themselves.”

“How about when [algorithms] seem to know my worst fears and then play on them and exaggerate them to get me to respond? In other words, clickbait, personalized to my psychological profile, as determined chiefly by an analysis of my online behavior. Anyone who has followed the recommendation engine on YouTube knows that after delivering one or two innocuous videos, the “Up Next” cue serves up increasingly extreme content. The algorithms push us to become caricatures of ourselves. They don’t merely predict our behavior; they shape it.” (emphasis added)

Disturbing, really, when you reflect upon it. Rushkoff goes further in discussing algorithms as well as efforts to obfuscate them and regain a ‘neutral internet’. Finally, he concludes that perhaps it’s what algorithms don’t see about us that might be the most human thing of all. He describes it as “the tiny bit of human mystery we have left,” suggesting that we must learn to love and encourage this mysteriousness before it’s too late. It’s an interesting view and, while he doesn’t dive into the philosophical, an interesting question about identity and what, indeed, makes us human.

Alongside, of course, is the question of the fallibility of algorithms – or, more accurately, of their human designers – even when they’re operating as expected. And there’s no escaping the ethical questions that lurk in the background – Rushkoff describes the secrecy around proprietary technology and algorithms, providing examples and suggestions of the damage they can inflict. This, in turn, leads back to questions of regulation.

If we can’t trust companies to behave ethically, can we – and should we – mandate changes to force more ethical behaviour and more transparency from them? If the algorithms from Big Tech that shape the internet in turn shape our own behaviours, do we not have a responsibility to ourselves and to our societies to demand transparency, to demand answers and explanations, and indeed to demand ethical applications of such algorithms? If we can’t determine how driverless cars should make ethical decisions about ‘who to kill’ in extreme scenarios (the real-world driverless-car version of the trolley problem), then how – and why – would we trust algorithms designed in secrecy to dictate the information we see, absorb and learn from online?

So many questions, and not nearly enough debate – let alone answers. Have a read of the piece and let me know – what do you think about what Rushkoff says here?

Read: https://medium.com/s/douglas-rushkoff/the-real-value-of-facebook-19d1d6cb3003

Psychological barriers to the elevated future of mobility

Anne says: Here’s the big question: Would you travel in an air taxi? (That’s a driverless, automated flying car).

My initial reaction was sure, why not? Then I thought a little further – would I? So what’s going on here?

This article from Deloitte, part of a series on advanced vehicle technologies and the future of mobility, unpacks the key issue that will inhibit the adoption of flying cars – and the answer is really quite simple: people!

Let’s accept that the technology is possible and advancing at a rate that will enable us to commute through the air. But remember, air travel hasn’t always taken off (excuse the pun) – think of the Hindenburg… The key point highlighted:

“…to consumers, aerial vehicles seem, understandably, more inherently hazardous than earthbound vehicles.”

From here, I re-considered the question. Now I’m imagining flying transport hurtling around a few metres off the ground, probably following the road system – though I guess it wouldn’t have to. Maybe air taxis would be better restricted to particular heights, clearing buildings – but then again, that might be a little too high for an autonomous air taxi. This means we’ll need regulators, we’ll need safety and governance models, we’ll need… a lot…

“…Unless ordinary people embrace this next-generation mode of transportation — incorporating airborne options into their daily lives along with more traditional modes — cars will likely stay earthbound.”

The article discusses public opinion at length, along with working with regulators, safety, governance and creators. The key to adoption will be social acceptance – without it, the industry won’t be taking off anytime soon.

A really interesting article – I encourage you to consider your own attitudes and what barriers to acceptance you may have.

Fasten your seatbelts, we have interesting times ahead!

PS. I was a fan of the Jetsons cartoon – I can really imagine flying around like that, but in our current urban environment it doesn’t seem like it would be as much fun as the Jetsons’ approach!

Read: https://www2.deloitte.com/insights/us/en/focus/future-of-mobility/psychological-barriers-to-elevated-mobility-autonomous-aerial-vehicles.html

New Year’s Resolutions Don’t Last. Do This Instead

Jakkii says: Do you make new year’s resolutions? Perhaps you set goals for the year but try not to call them ‘new year’s resolutions’ because you already know most people don’t stick to their resolutions. Or maybe you don’t make any at all, for the same reason.

I’ve never been big on new year’s resolutions. I tried making them for a while when I was much younger – mostly because I thought you were supposed to do so. However, I quickly realised I never stuck to them, and to be honest, it’s not that surprising when you think about what many traditional resolutions look like. They’re generally huge, life-changing goals, often unspecific in detail or timeframes, and require lots of effort for each one – and we often make several of these lofty promises at once, willfully naive to the reality of being a human and the inertia of change.

In these terms, making new year’s resolutions seems silly. And they are, in a way. Yet, goal setting itself is not silly; nor is the desire for learning, change and self-improvement. So, besides avoiding calling them ‘new year’s resolutions’, what can we do instead in order to find success? This article offers some solid advice:

  1. Begin with intention instead
  2. Start with the here and now
  3. Goals with intention
  4. Let heart and mind work together
  5. Staying on track

If you’ve set goals for the year – personal, professional or both – or want to do so, give this advice a read and see what you think. It just might help you make the right adjustments to your plans and tactics in order to meet those goals. I’d love to hear from you either way, though – do you set goals for the year? Why or why not? How do you keep yourself on track?

Read: https://www.forbes.com/sites/nazbeheshti/2018/12/11/new-years-resolutions-dont-last-try-this-instead/#4c266d5059da

This Week in Social Media

Sydney Business Insights – In Conversation: The power of followership Podcast

The Future, This Week is on holiday hiatus, so this week we bring you a podcast from SBI’s ‘In Conversation’ series, “The power of followership.”

In this podcast: Professor Melissa Carsten studies what most of us are – followers. There can be no leaders without followers, so we ask her “what makes a good follower?”

Professor Carsten has a PhD in Organisational Psychology from Claremont Graduate University. She has published widely in the field of organisational behaviour and human resources. Her research focuses on leadership and followership in organisations and the role that implicit followership theories play in leader-follower interactions.

Show notes and links for this episode:

Melissa Carsten, Professor of Management, Winthrop University

Melissa Carsten’s Followership articles:

Exploring social constructions of followership: A qualitative study

Followership theory: A review and research agenda

Followership: What is it and why do people follow?

Listen: http://sbi.sydney.edu.au/the-power-of-followership/

