Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves in the archive.
The WeWork Manifesto: First, Office Space. Next, the World
(Warning: long read)
Anne says: Are you familiar with WeWork – the global network of shared offices? There are five locations across Sydney and Melbourne.
You may think WeWork is just another funky shared office space – think Regus, but cool. You’d be partially correct: their spaces are well designed to foster a particular style of working.
Did you know that WeWork is valued at around US$20 billion? That’s a lot of shared offices – in fact, more than 200 global locations. But WeWork has broader ambitions than shared office spaces. This article reveals the bigger picture, including interviews with the founders and their views of the future workplace – including where we live, where we work out, where we play, and where our children are cared for. It’s revealing – but there’s also something niggling at me about homogeneity – tribes, nationalism, sameness, Lord of the Flies…
The current project, Dock 72, is touted to include: “…an enormous co-working space, a luxury spa and large offices, for other companies like IBM and Verizon, that are designed and run by WeWork. There will be a juice bar, a real bar, a gym with a boxing studio, an outdoor basketball court and panoramic vistas of Manhattan. There will be restaurants and maybe even dry cleaning services and a barbershop.”
“It will be the kind of place you never have to leave until you need to go to sleep — and if Mr. Neumann (co-founder WeWork) has his way, you’ll sleep at one of the apartments he is renting nearby.”
Neumann talks about bringing people together – they’ve already expanded into WeLive apartments, an elementary school called WeGrow, and have acquired companies like Meetup. Bringing people together.
But where is the limit? When is togetherness too much?
The village concept enables community living – but not in the same building, nor the same workplace, let alone the same gym, the same school, the same child care services… I’m expecting a mention of shared aged care services next. It hasn’t come up so far, but with the ageing population and people working longer, is there a proposition for WeRetire accommodation?
Take the time to read this article and ponder whether this “We-ness” is the future of work and a way of healing fractured societies – or whether it is evolving into something else for a specific target audience.
Can AI Be Trusted With Life-And-Death Decisions?
Nat says: If there’s one thing the 2004 movie I, Robot taught us, it’s that human judgment cannot be trusted. The laws of robotics, as depicted in the film, state that a robot cannot harm a human, a robot must obey human orders, and a robot must protect its own existence so long as that protection does not conflict with the previous two laws. However, the real message of the film is not that the robots defy these laws and rise up against their creator, but the robotic realization that we humans were defying the laws we created for our robots, and therefore cannot be trusted in our decision making. The robot uprising was, in theory, for our own protection: the robots were still acting in line with the three laws, but our interpretation of their actions (as depicted in the film) was what confused us. We realized we had the potential to create machines that could decide our fate.
Now I know that I, Robot is just a film, but art has always revealed hidden messages to us, especially about our use of technology. The shared article discusses decision making in relation to autonomous vehicles, arguing that we cannot put life-and-death decisions into the hands of our created AI – that the human passenger, in certain scenarios, will still need to take the wheel and steer the car. Part of the reason is legal. If the car ‘decided’ (which is nothing more than a human-programmed algorithm taking effect) to save the life of the passenger inside the car rather than the pedestrian on the street, who would be liable? The car’s programmer? The car’s owner? The car’s manufacturer? The second reason is that the decision-making ability of AI is not as advanced as we might like to think.
Unlike more primitive computer algorithms based on ‘if-then’ rules, self-learning algorithms are trained on existing data in which they recognise patterns, and they then learn to apply those patterns to new situations, so long as they are of a similar nature. There are no ‘rules’ to follow; the algorithm instead learns to adjust itself in response to new data. This new data, however, is fleeting and arguably infinite. A human in a life-or-death situation reacts on gut instinct, but an AI makes a kind of choice based on its understanding of past data. Just think of the opening of I, Robot, which is explained later in the film: Will Smith’s character and a young girl are drowning in a sinking car, but the robot chooses to save him instead of the girl, because its calculation of survival odds favoured the man. A human would likely have chosen to save the young girl.
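The distinction can be sketched in a few lines of Python. This is a toy braking example of my own invention – the function names, distances, and “training” data are all made up for illustration, and nothing here resembles how a real vehicle’s AI is built:

```python
# A hand-authored 'if-then' rule versus a decision boundary learned from data.
# Purely illustrative: distances and data are invented for this sketch.

def rule_based_decision(obstacle_distance_m):
    # Primitive approach: a fixed, human-written rule.
    if obstacle_distance_m < 10:
        return "brake"
    return "continue"

def learn_threshold(examples):
    # "Training": place the boundary midway between the farthest past
    # 'brake' case and the nearest past 'continue' case.
    brake_distances = [d for d, action in examples if action == "brake"]
    go_distances = [d for d, action in examples if action == "continue"]
    return (max(brake_distances) + min(go_distances)) / 2

# Past situations the algorithm learns from.
history = [(3, "brake"), (8, "brake"), (15, "continue"), (25, "continue")]
threshold = learn_threshold(history)

def learned_decision(obstacle_distance_m):
    # The learned pattern generalises to new, similar situations --
    # but there is no explicit rule to inspect or fall back on.
    return "brake" if obstacle_distance_m < threshold else "continue"
```

Note how the two approaches can disagree: at a distance of 10 metres the fixed rule continues, while the learned version brakes, simply because the training data happened to pull its boundary further out. The learned behaviour is an artefact of past data, not a rule anyone wrote down.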
My point is that a human is a conscious being who can do things like love someone but hate what they have done, and act on humanitarian drivers. A machine, however, has no concept of reality or even of its own existence. Scarily, the presence of AI in our lives is growing, and stories like I, Robot depict our own creations as an existential risk. In a similar albeit more catastrophic vein than the movie, imagine the following scenario: would a super intelligent AI make the ethical decision to keep both its maker and the planet alive, despite the fact that it could survive without either, powered merely by the sun? This touches on Moravec’s paradox: we have programmed our machines to be logic-based, working in a manner different from our own biological processes, which encompass more than just ‘cognition’ as our lives unfold in holistic contexts. Without sentient awareness, AI may become so advanced and integrated into our lives that it reaches the logical conclusion that humans are a plague on the planet, and decides to eradicate us all – that our death might be the only state in which to keep us peaceful. A super intelligent machine cannot fathom basic human rights, or gauge right and wrong by intuitive judgment. In creating our AI and discussing its role in life-and-death scenarios, we are failing to realise that what we might be programming, in the long run, is our very own demise.
Has automation got your job in its sights?
This week, the retrenchment of six thousand National Australia Bank employees came into effect. At the time of the announcement last November, the bank reported a profit exceeding $5 billion, so poor earnings were not the issue. The culprit of this significant downsizing (1 in 5 staff) is none other than technology.
Daniel Ziffer’s report on ABC’s The Business looks at the shifts in job types and the speed at which these changes are occurring. Routine-based jobs are most at risk, while occupations requiring people skills will be impacted less. However, it seems that all job types will be impacted by technology in one way or another. Dominic Barton, a consultant at McKinsey and Company, said that “for 60 per cent of jobs, 30 per cent of the activities are automatable. We’re not waiting five years, that’s happening now.”
Technology may be replacing jobs, but it is also creating new ones. Re-skilling will be essential for many current workers, and those yet to enter the workforce will need to carefully consider their career options, or they might find themselves graduating for a job that no longer exists.
Linked to this article is an online tool, Could a robot do your job?, created from data compiled by the research company AlphaBeta. Here you can search hundreds of jobs to find out to what extent they are expected to be impacted by technology. Have a go and see if you are likely to be re-skilling anytime soon.
Pro-Gun Russian Bots Flood Twitter After Parkland Shooting
“The goal (of the bot creators), after all, isn’t to help one side or the other of the gun control debate win. It’s to amplify the loudest voices in that fight, deepening the divisions between us.”
Emilio says: And they have struck again! Those ‘WMDDs’ have once more fired up and sown confusion on the Twittersphere following last week’s horrific shooting massacre in Parkland, Florida, in which 17 school kids were killed.
In a Friday Fave post last year, I coined the phrase ‘Weapons of Mass Distraction and Disinformation’ (WMDDs) to refer to the millions-strong army of Twitter bots – fake users and Twitter puppets – that exist for one thing and one thing only: to dominate online conversations and cloud the facts around issues, mostly political ones.
While Anne last week touched on the ilk of automated bots that are useful (yes, not all Twitter bots are evil), the wicked kind have reared their ugly heads in the wake of the Parkland tragedy, confounding the issue besieging America today: gun control.
Within mere hours of the massacre, RoBhat Labs’ Botcheck.me, a website that tracks political propaganda bots, saw a surge in activity from bots that latched onto the trending hashtags #Parkland, #guncontrol, and #guncontrolnow.
Meanwhile, another website that monitors the activity of Russia-linked accounts has reported that those bot armies have also been hard at work, tweeting links to articles that debunk the claims of the pro-gun-control side. The theory is that the Russian accounts are cunningly grabbing attention in order to protect the Kremlin and push its agenda in the ongoing investigations into alleged Russian interference in the US elections.
The gun control debate is all muddled up on Twitter, and the confusion over the facts surrounding the issue rages on.
Now more than ever, in the era of bots and fake news, everyone needs to be critical of every piece of information seen on the internet, lest we become unwitting pawns in the propaganda of altered perception, slain by the Weapons of Mass Distraction and Disinformation (WMDDs).
Apple Park Employees Reportedly Can’t Stop Walking Into Its Glass Walls
Jakkii says: What’s that? An article not about our dystopian future driven by the massive technology companies of today? Say it ain’t so, Joe!
It’s true though – this week, I bring you something lighter: Apple employees keep narrowly avoiding broken noses by walking into glass walls at Apple’s Apple Park headquarters. Assuming there isn’t a widespread attempt amongst the employee populace to have Apple pay for nose jobs, it’s rather deliciously ironic that employees are allegedly so distracted by Apple’s own products that they find themselves physically running into walls in an environment so painstakingly and exactingly designed. To add salt to the wound, employees’ efforts to add signifiers have been thwarted: post-it notes on the glass were removed because they “detracted from the building’s design.”
The whole thing is hilarious (though perhaps not for those injuring themselves by running into the walls), but it’s also a tremendous example of prioritising form over function, and a good reminder of why we need to strive to strike a good balance between the two in our designs.
Sydney Business Insights – The Future This Week Podcast
This week: where cars are going, banking on automation, and Apple glass, Facebook torture and robots in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
Other stories we bring up:
Robot of the week: