Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
新年快乐 Xīnnián kuàilè
Happy Chinese New Year!
Whoopi says: 2018 is the Year of the Dog – so I felt compelled to highlight what you can expect from our year!
The Year of the Earth Dog, 2018, will be a year of social change. Dog years can bring many changes, good and bad, and economically they are normally growth years. They can also bring the collapse of major institutions and large adjustments in the economy, such as in stock markets or currency values – so it is a good time to be cautious when it comes to finances.
According to the CLSA Feng Shui Index:
“The Dog represents duty and loyalty and is a sign of defence and protection. It’s a good time to be level headed and to err on the side of caution. Entrepreneurs should stick with their most loyal clients, and investors are advised not to bite off more than they can chew.”
Personally – Sansa and I will take our roles seriously to ensure loyalty and wellness for the team during any turbulent times.
Twitter Bots – are they really evil?
Anne says: danah boyd once again challenges you to question some of the media hype (she labels it “puffery”) behind the issue of Twitter bots in particular, but also the concepts of likes and friends and the need to demonstrate popularity. She also highlights the usefulness of Twitter bots: in case you didn’t realise, many of the automated messages and updates you see are sent by bots. They are not evil, they’re useful.
While no-one supports the use of bots to manipulate electoral outcomes, or the use of fake accounts to trick or trap unsuspecting users, before we decide to support the “Ban the Bot” movement it may be timely to consider how many useful automated updates would be lost.
Perhaps danah’s selection of image says it all: today a peacock, tomorrow a feather duster (one of my mother’s favourite sayings).
Facebook’s Corporate Spyware?
Helen says: Facebook recently rolled out a security feature in its iOS app, found under Explore, that points users to Onavo Protect. This VPN app is described on the App Store as designed to ‘keep you and your data safe when you browse and share information on the web’. Essentially, it hides the user’s identity by routing their traffic via a third-party server, making browsing more secure. A rather compelling feature, right? Better still, to access this type of protection a user would ordinarily need to subscribe to and pay for a service, but out of the goodness of its corporate heart, Facebook is providing it for free… or not.
Facebook does not readily disclose to the user that it is the proud owner of Onavo Protect. CNBC reports that, in addition to providing security, the app collects data and shares it with Facebook, which uses it to see what users do online even when they are not on one of Facebook’s own sites.
According to TechCrunch, Facebook gains an enormous competitive advantage by spotting new trends across the larger mobile ecosystem. There have been over 33 million installs of the Onavo app, 62 percent of them from Google Play, so there is plenty of room for growth in those numbers with the iOS promotion. Of concern, TechCrunch points out, is that Onavo users are likely oblivious to the fact that they are ‘feeding the information that allows it (Facebook) to take on any challenger to its social networking empire’. If they knew, perhaps they would choose to pay for an independent VPN service instead.
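For the technically curious, the mechanism on offer is easy to picture: route your web traffic through an intermediary server so that the sites you visit see its address rather than yours. A rough, hypothetical sketch of that idea using a plain HTTP proxy (the proxy address is a placeholder, and a real VPN tunnels all device traffic, not just web requests):

```python
# Rough illustration of routing web traffic through an intermediary server.
# The proxy address is a placeholder; a real VPN tunnels all traffic at the device level.
import requests

proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# The destination site sees the intermediary's IP address, not the user's.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())  # reports the intermediary's address, not yours
```

The catch, of course, is that whoever operates the intermediary can see everything you send through it – which is precisely the concern with Facebook sitting in the middle.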
Read: https://www.cnbc.com/2018/02/12/facebook-promoting-onavo-protect-without-disclosing-ownership.html
Unilever Threatens to Reduce Ad Spending on Tech Platforms That Don’t Combat Divisive Content
“2018 is either the year of techlash, where the world turns on the tech giants — and we have seen some of this already — or the year of trust”.
Emilio says: Would an ad boycott force the social media giants to rid their platforms of divisive and hateful content?
Global FMCG giant Unilever thinks so. This week its CMO, Keith Weed, made a bold pronouncement, threatening to pull the billions of ad dollars being poured into Facebook and YouTube if they don’t clean up their act. He said the company would only invest in platforms that were committed to creating a positive impact on society, not those that peddled fake news, promoted anger and hate, and disregarded the interests of children.
Toxic content isn’t the only shade that Unilever has thrown at Facebook and YouTube. Transparency and ad-metrics standards have also been questioned. Last year, Facebook advertisers lamented the inaccuracy of ad video views on the platform, with some even demanding that Facebook allow third-party analytics and measurement software as a check and balance.
Facebook has already started purging users’ feeds of promotional brand posts, instead giving high priority to content that encourages ‘meaningful interactions’. YouTube has also been deploying more human moderators to ensure that questionable content, ads potentially harmful to minors, and placements that could damage brand advertisers don’t slip through.
It will be interesting to see whether Unilever throwing down the gauntlet to these digital media giants leads other major advertisers to follow suit. At the end of the day, Facebook and YouTube’s biggest customers aren’t users; their lifeblood is advertising.
Inside the two years that shook Facebook – and the world
Jakkii says: If you’ve been following my pieces in our Friday Faves over the past couple of months, you’ll be well-versed in the view of the dystopian future we are living in – one we’ve created for ourselves, through enormous technology companies, our lack of understanding of the harm they could enable, and our slow reactions as individuals, societies, regulators and governments.
It should then come as little surprise that this week I’m back with another piece on Facebook. This one, in Wired, provides an oral history of sorts of the past two years inside Facebook that have led us to this point – one in which advertisers like Unilever are threatening to pull their dollars, as Emilio’s linked article delves into.
But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.
(emphasis mine). I think there’s little question at this point in 2018 that “fake news” is a legitimate problem, though arguably part of the problem is obfuscation: muddying the waters by labelling as “fake” any piece whose information and insights don’t suit your preferred narrative. I agree with the author: ‘neutrality’ is a choice, and a lack of editorial oversight is, indeed, an editorial decision in itself.
This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”
This quote comes a little further into the piece, and is incredibly illuminating. Implicit in it is that removing human oversight – human judgement – from Trending Topics turned out to be a hugely problematic decision, for Facebook and for all of us. We have touched on issues with algorithms before in our Friday Faves discussions, such as in AI Trying to Design Inspirational Posters Goes Horribly and Hilariously Wrong. Without knowing the specifics of this particular algorithm, it isn’t a stretch to imagine it surfaced the content with which it was seeded the most, whether via bots, bad actors, or even in good faith by the misinformed.
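As a toy illustration of that seeding problem: a purely count-based trending module will surface whatever it is fed most, with no notion of whether the volume comes from bots, coordinated actors or genuinely popular stories. A hypothetical sketch (the headlines are invented for illustration):

```python
# Toy illustration: count-based "trending" surfaces whatever it is fed most often,
# regardless of whether the volume comes from bots or from real interest.
from collections import Counter

organic_posts = ["local election results", "severe weather warning", "cup final score"] * 50
bot_posts = ["entirely fabricated headline"] * 500  # a small botnet flooding one story

def trending(posts, top_n=3):
    """Rank topics purely by how often they appear."""
    return Counter(posts).most_common(top_n)

print(trending(organic_posts + bot_posts))
# The fabricated headline tops the list on volume alone – the human judgement
# that might have filtered it out is exactly what was removed.
```

It’s only a cartoon of what Facebook’s real system does, but it shows why taking the humans out of the loop made the module so easy to game.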
At the end of the piece the author touches on announcements made by Zuckerberg & Facebook in 2018, from “favouring meaningful interactions” to ‘promoting content from trustworthy sources’. Well-meaning as these new moves may genuinely be, it can’t be overlooked that the destiny with which Facebook is most concerned is that of Facebook itself. Zuckerberg has hinted there will be more announcements throughout 2018 of a similar ilk – time will tell whether Facebook can undo some of the damage it has done to itself, to news, and perhaps even to democracy at large.
This is a long read – one to put aside to read over coffee (or a beer!) on your weekend, when you’ve got some focused time. It’s fascinating, and worth the investment.
Read: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/
Boston Dynamics releases video showing humans can’t hide in a robot apocalypse
Joel says: It seems Boston Dynamics, one of the world’s most advanced engineering and robotics design firms, is back doing what it is best known for: releasing videos that manage to both amaze and scare viewers at the same time.
In the footage, a four-legged SpotMini robot – unveiled in November last year – uses a claw mount on its head to reach out, deftly turn the handle, and hold the door open for its fellow robot.
The company did not release any details along with the video, but the SpotMini is described on its website as:
“a nimble robot that handles objects, climbs stairs, and will operate in offices, homes and outdoors.”
Needless to say, many on social media were not thrilled with the possibilities of this development, with some comparing it to the velociraptors in Jurassic Park or bemoaning their doom in the event of a possible future hostile robot takeover.
The US-based firm says its mission is to “build the most advanced robots on Earth, with remarkable mobility, agility, dexterity and speed” — and has released videos in the past of their various other robot models showcasing new skills.
I’m not sure what the end goal is for Boston Dynamics. I suppose only time will tell if we will be using these robots in our homes to help with mundane tasks, or bowing down before them when they realise they are probably smarter and more durable than us. Let’s hope they don’t read Friday Faves.
Would you date a robot?
Nat says: Seeing as Valentine’s Day was earlier this week, I thought I’d share something about “the future of dating”. I remember a couple of years ago watching a documentary about ‘objectophilia’, the label given to people who are sexually attracted to inanimate objects. The show followed people who were in love with objects such as the Berlin Wall, a firetruck, the Eiffel Tower (a lady actually married the tower in 2007), and their home furniture. Now, the existential and technological philosopher in me looks at the love of objects in a curious light. No, I myself am not attracted to objects, but in a somewhat warped way I can see positive aspects to loving man-made constructs.
What these people are in love with is fundamentally technology, and the creation of such technology required a process of alchemy at the human-nature intersection. The love of the ‘object’ is the love of a product that is never final: it spent years in the making, required many hours of human labour, collaborative effort and material from the earth in order to be created, and will forever be used and shaped by social context. Furthermore, although we like to think of ourselves as materialistic, we often show little respect for our materiality. People with objectophilia, at least, love and nurture our technological creations and even join conservation efforts to protect them (their lovers!). So the main curiosity for me is not so much the love of the ‘object’ itself, but why these people love, and are sexually attracted to, objects more than other human beings. I mean, I can see the appeal of not wanting to be around other people, but love and intimacy – how could these ever be replaced by technology?
This brings me to the shared article which claims that in a survey of 12,000 millennials from across the world, over a quarter said they would date a robot. According to some experts, we will soon see the rise of ‘digisexuals’ – the next iteration of objectophilia in which people identify sexually as lovers of robots. What this brings into debate, however, is what our love of robots reveals about us as human beings. Arguably, all our technological applications reveal things about us as humans. Could our sex robots reveal us as a tragic case, similar to the movie ‘Lars and the Real Girl’ in which someone cannot connect with a real-life person? Or is it more to do with wanting to be intimate with ‘someone’ who we know will never leave or betray us? Or perhaps we believe that ‘love’ cannot be locked down and the only reason we stay in long-term partnerships is for physical intimacy. So many questions left unanswered. Weirdly, we are already so intimate with technology (existentially speaking), that it makes me wonder why people want to take this relationship to that extra step. I’d love to meet a digisexual just so I can ask all my curious questions.
Would you ever date a robot?
Read: http://www.dailymail.co.uk/sciencetech/article-5156943/27-millennials-say-consider-dating-robot.html
Sydney Business Insights – The Future This Week Podcast
This week: classy Facebook, the end of the internet, and China, crypto and space stations in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
Facebook patents tech to determine social class
Aviv Ovadya is worried about an information apocalypse
Other stories we bring up:
Inside the two years that shook Facebook and the world
Early social media employees join forces to fight what they built
Facebook was designed to prey on fear and anger
Facebook funded experts who vetted its Messenger for Kids
Facebook employees are paranoid that company spies on them
Google tests bot to chat with friends for you
Our discussion of how Facebook figures out family secrets
Our discussion of fake reviews of fake videos
AI makes it easy to create fake videos
Fake hyper-realistic photos of objects, people and landscapes
Stanford’s real-time face capture and re-enactment of videos
The University of Washington synthesizes Obama – lip-sync from audio
Our latest research in digital human technology
Listen: http://sbi.sydney.edu.au/future-week-16-february-2018/