Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
The real story behind the fake story
Anne says: Have you heard of Laboratoires Berden, founded in 1996 by Eric Dumonpierre?
Dumonpierre was an award-winning CEO, recognised for his social responsibility initiatives – including a 32-hour work week, planting trees, and using hybrid cars before anyone knew what they were – the list goes on. Dumonpierre was a celebrity CEO until the mid-2000s, when a series of scandals rocked the share price and his reputation. Both recovered, however. But in 2014 it was all over. What happened?
Well, it's simple – Laboratoires Berden didn't exist. Nor did Eric Dumonpierre! Both were created – or rather, fabricated – by students from HEC Paris International Business School, starting in 2005. The course addressed corporate reputation and crisis management in the age of the internet. The students' assignment:
“Create a company and a CEO, and raise their profiles online so that they rise to the top of search engine rankings for general terms related to the firm.”
The project involved two groups of students: one group building up the company and its CEO with false stories, and the second creating reports of scandal and misdeeds, using the same tools and the same rules.
More than an engaging way to teach a course, the outcomes over the 10 years have given the academics some insight into what works and what doesn't when it comes to spreading false stories through social media and other internet resources. Some of the key findings:
“Readers are more likely to distribute vivid stories that inspire emotion — such as fear (polluted rivers), disgust (child labor), and surprise or delight (32-hour workweek) — than stories that are flatly recounted.
The students also boosted believability through repetition, reposting and relinking stories across the ecosystem of sites and accounts they had created, until eventually the algorithms lifted those stories to the top of search-results lists. Researchers have shown that repetition increases the perceived accuracy of false news. In short, familiarity creates credibility.”
Why is this relevant? In today’s fake news context, it demonstrates how easy it can be to develop false stories, spread them and create credibility. The authors suggest that fake journalists could produce fake news about organisations, their reputations, their executives and cause substantial corporate damage. In response, organisations need to develop the capabilities to monitor and identify these activities before they’re allowed to go viral or stick to corporate reputation. Quite a challenge I would say, based on the effectiveness of these students’ efforts!
Read: https://hbr.org/2018/07/the-real-story-of-the-fake-story-of-one-of-europes-most-charismatic-ceos
Why We Should Require All Students to Take 2 Philosophy Courses
Nat says: According to The Chronicle of Higher Education, students need more philosophy in their university studies — regardless of what degree they are studying. Of course, I can't help but agree. Even in my current PhD context, I am incredibly mindful of the word "philosophy" comprising part of the degree's title, as so many of my peers in other disciplines (namely those following the scientific method) have no concept of, or care factor for, things like ontology and epistemology. It seems that in their context they do not need to question what the nature of reality is, or how we can know and create knowledge about said reality, because the scientific method is seen as the only way to study and explain our "objective world" through the use of mathematics. Frighteningly, I'm not kidding. I actually know people with PhDs — you know, people who have made contributions to knowledge and are now DOCTORS of philosophy — who have no concept of questioning the very foundations their whole thesis is predicated on. I remember being so mindful of this that when I told my supervisor I wanted my PhD to be philosophical, he laughed and told me that I couldn't get more philosophical, given what my topic is about and the "worldview" I am taking in exploring it. Yet I still wonder why this philosophical element goes unnoticed in many other research and academic contexts.
The shared article lists the core questions that philosophy courses would allow students to think about, namely:
- “Questions of Identity” (Who am I? Who are we?)
- “Questions of Purpose” (Why are we here? What’s it all for?)
- “Questions of Virtues and Vices” (What is truth? What is beauty? What is morality?)
- “Questions of Existence” (What does it mean to be alive, to die, indeed, to be? Or not to be?)
These are most likely questions you've asked yourself throughout your life, so it seems fitting that universities would ask such questions of their students, hopefully prompting discussion and debate in class. I for one, as someone who teaches in a business school, try to bring philosophy into the classroom at every opportunity I get. Honestly, I can't help it, as my mind seems to work that way naturally. But I wish others would support the philosophical movement, students included. Instead of asking questions like "Will this be on the exam?", or answering "What's the goal of business?" with "profit", maybe philosophically inspired questions will encourage students to think beyond what they have been taught most of their lives. Ironically, including philosophy on the learning syllabus will likely lead to a lot of "unlearning" for students as well.
Read: https://www.chronicle.com/article/Why-We-Should-Require-All/243871
The Biometric Mirror and the ethical use of AI
Helen says: I have touched on the subject of AI a few times in past articles, which is not surprising given its ever-extending reach into our lives. One of the big discussions out there is how we ensure the ethical use of AI. This article is an interesting read on the topic, and its reference to the sci-fi thriller Minority Report gave me a bit of a chill.
In an effort to stimulate this discussion and to help us better understand AI decisions, researchers at the University of Melbourne have developed the Biometric Mirror. This programme compares a facial image against a data set of thousands of facial images labelled with crowdsourced feedback. It returns a profile of the person's demographics, along with information about their mood, nature, attractiveness and other psychometric attributes. The results demonstrate that while the algorithm works correctly, the information it returns does not, because of the subjective nature of the data it draws on.
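To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python of how such a system might estimate subjective attributes: a query face (reduced to a feature vector) is matched against crowd-labelled faces, and the labels of the closest matches are averaged. The feature vectors, attribute names and data are all invented for illustration – this is not the University of Melbourne's actual code.

```python
# Hypothetical sketch: estimate subjective attributes of a face by averaging
# the crowdsourced labels of its nearest neighbours in a labelled gallery.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "database": feature vectors plus subjective, crowdsourced scores.
gallery_features = rng.normal(size=(1000, 128))          # e.g. face embeddings
crowd_labels = {
    "perceived_trustworthiness": rng.uniform(0, 1, 1000),
    "perceived_attractiveness": rng.uniform(0, 1, 1000),
}

def profile(query_features: np.ndarray, k: int = 25) -> dict:
    """Return attribute estimates as the mean crowd score of the k nearest faces."""
    distances = np.linalg.norm(gallery_features - query_features, axis=1)
    nearest = np.argsort(distances)[:k]
    return {attr: float(scores[nearest].mean()) for attr, scores in crowd_labels.items()}

print(profile(rng.normal(size=128)))
```

The researchers' point falls straight out of this structure: the lookup itself is perfectly "correct", but its output can only ever be as good as the subjective crowd labels it aggregates.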
Of concern are the negative consequences that the use of biased or flawed AI information could have on an individual. It is encouraging to see an Australian research initiative raising awareness and hopefully generating broader discussion on this important topic.
Google to let you pop its AI chips into your own computer as of October
Joel says: Google, one of the biggest tech companies in the world and one of the top companies actively working on artificial intelligence, will soon let customers use its custom-built AI processors in their own hardware.
Google's TPUs, or tensor processing units, accelerate AI tasks like understanding voice commands or recognizing objects in photos. Today you can pay to run that kind of work on Google's cloud-computing infrastructure, with Google taking the brunt of the processing. Through a program called Edge TPU, Google will let programmers install the TPUs in their own machines from October this year.
AI – often called machine learning or deep learning, and built on a brain-inspired technology called neural networks – represents a profound change in computing. It lets people train computers on real-world data so the machines figure out patterns for themselves, like what a pedestrian in front of a self-driving car looks like or how to pick the right exposure for a sunset photo.
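For readers curious what "training a computer on real-world data" looks like in practice, here is a minimal sketch using TensorFlow/Keras, the kind of framework whose workloads TPUs accelerate. The data is random and stands in for real examples such as labelled photos; the model and labels are purely illustrative.

```python
# Minimal illustrative example: a tiny neural network learns a pattern
# from example data rather than being given explicit rules.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 20)).astype("float32")   # 1000 examples, 20 features each
y_train = (x_train.sum(axis=1) > 0).astype("float32")     # the hidden pattern to discover

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # outputs a probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training adjusts the network's weights to fit the examples.
model.fit(x_train, y_train, epochs=5, verbose=0)
print(model.evaluate(x_train, y_train, verbose=0))
```

The chips discussed in the article don't change code like this; they are specialised hardware designed to make exactly this kind of computation run faster.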
Letting customers use Google's AI chips in their own computers could significantly expand the number of customers, and the range of computing jobs, drawn to Google's AI technology.
Google’s move also makes it a tighter competitor with Microsoft’s Azure computing service.
Check out the full article to learn more about these tiny but powerful processors and find out how you can apply for access to them if you’re interested in developing on the Google cloud platform.
A 4-Day Workweek? A Test Run Shows a Surprising Result
Jakkii says: I've got to be honest, the only way the results from this would've surprised me is if it had shown a (statistically significant) negative impact on productivity, output or measurable outcomes.
So, imagine my non-surprise to discover that, in fact, this test run showed the opposite:
The firm, Perpetual Guardian, which manages trusts, wills and estates, found the change actually boosted productivity among its 240 employees, who said they spent more time with their families, exercising, cooking, and working in their gardens.
If you've ever read anything about other companies – and cities – that have trialled this, or reports about how much of our workdays we actually spend productively working (the article touches on each of these), then this will likely come as little surprise to you as well.
I think one of the most interesting things about the future of work in knowledge work roles is that we still view remuneration as predominantly time-dependent, rather than primarily outcomes-based. Sure, if your performance against agreed measures isn't up to scratch you might eventually find yourself without a job, but receipt of your salary is based on the understanding that you will work X hours per week at Y hourly rate to ultimately receive an annual salary. Similarly, projects and consulting are often as time-based as they are deliverables-focused: what's the hourly rate, and how many hours of effort are involved? Many of us spend time on a daily or weekly basis – or once a quarter if we're really bad at it – measuring our own time, broken down in some places to the exact minute, in order to complete timesheets, whether for billing or to justify our own existence. I've worked in places where, although everyone was doing non-customer-facing knowledge work, people were hyper-aware of everything their colleagues were doing, relating it to the "time" invested in work versus other things – even monitoring how long people spent in the restroom – yet never once identifying the inherent irony in the time they were spending monitoring and policing others' behaviour when they themselves should have been working.
I think we must question whether this model is best-fit for knowledge work and whether our focus on time is really the key to productivity. While this article and this ‘test’ by Perpetual Guardian are themselves time-focused, I see these types of ideas as part of the process of questioning and, hopefully, evolving our ideas about what work is in the information era, and how we get the best outcomes from that work. That said, applauding ourselves for “paying staff for 40 hours though they only work 32 hours” is, I think, missing the mark: it is still inherently focused on time spent “at work” as the single best measure of an employee. Is that really the best we can do?
What do you think? What might the ideal model of work and remuneration in the future look like?
Read: https://www.nytimes.com/2018/07/19/world/asia/four-day-workweek-new-zealand.html
Sydney Business Insights – The Future This Week Podcast
This week: A Vivid Ideas special debate with Rachel Botsman and Mike Seymour: Can I marry my Avatar?
Listen: http://sbi.sydney.edu.au/digital-humans-special-on-the-future-this-week/