Futures Forum: The promises of technological innovation >>> The World in 2016
And many of the expectations centre on the jobs these new technologies will bring:
Futures Forum: Creating/destroying jobs >>> Creative Destruction and Artificial Intelligence
Futures Forum: Artificial Intelligence: 'complements labor and increases its productivity'
Although there are also huge doubts about those promises:
Futures Forum: Artificial Intelligence: is it humanity's greatest 'existential risk'?
Futures Forum: Technological unemployment and the Luddite fallacy
2017 Will Be the Year of AI | Fortune.com
5 Big Predictions for Artificial Intelligence in 2017 - Technology
Google's AI Has Reinvented the Master Language | Foundation for Economic Education
This Is How AI Will Change Your Work In 2017
Don't fear artificial intelligence.
It's what's going to help you do your job faster and better in 2017.
LYDIA DISHMAN 01.04.17
Artificial intelligence is growing fast. Recent research puts it at a $5 billion market by 2020, and Gartner estimates that 6 billion connected "things" will require AI support by 2018. Connected machines, wearables, and other business tools like voice assistants are already boosting productivity at work and at home.
Two reports just surfaced that tackle the troubling predictions that automation, artificial intelligence, and robots are going to supplant human workers. Research from the McKinsey Global Institute and Glassdoor indicates that we don't have to worry about humans becoming obsolete.
Matt Gould, chief strategy officer at Arria NLG, a U.K.-based company offering AI technology for data analytics and information delivery, explains that AI has the ability to distill expertise into the machine. "Knowledge work, for the first time, can be produced at volume from NLG-AI systems," Gould says. "Far from killing the jobs of knowledge workers, this tends to free them up to do what they are paid to do—innovate, model, refine, and improve on the expertise of their business."
This Is How AI Will Change Your Work In 2017 | Fast Company | Business + Innovation
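To make the idea of "knowledge work produced at volume" a little more concrete, here is a minimal, hypothetical sketch of template-based data-to-text generation in Python. The function, figures and wording are invented for illustration only; they are not Arria NLG's actual technology.

# Minimal, hypothetical data-to-text sketch: turn two data points into a
# readable sentence. Illustrative only; not any vendor's real NLG system.

def describe_sales(region: str, current: float, previous: float) -> str:
    """Generate a one-sentence narrative from two figures."""
    change = current - previous
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    if direction == "held steady":
        return f"Sales in {region} held steady at ${current:,.0f}."
    pct = abs(change) / previous * 100
    return (f"Sales in {region} {direction} by {pct:.1f}% "
            f"to ${current:,.0f} from ${previous:,.0f}.")

if __name__ == "__main__":
    rows = [("EMEA", 1_250_000, 1_100_000), ("APAC", 910_000, 975_000)]
    for region, current, previous in rows:
        print(describe_sales(region, current, previous))

The point of the sketch is simply that once an analyst's rules of thumb are encoded, the narrative can be regenerated for every row of data, at volume.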
Why Artificial Intelligence is the answer to the greatest threat of 2017 - cyber-hacking
John Clark Monday 9 January 2017
Our lives are now heavily mediated by digital technology (music streaming, social media, e-banking etc). We are increasingly and often continuously online, open to engagement in a myriad of services and simultaneously open to cyberattack.
2016 saw further high profile and financially driven security incidents, such as Tesco and TalkTalk, together with one of the highest profile attacks ever – the apparent compromise of the Democratic party’s information systems with potential influence on the US Presidential Election. We now need to defend against the lone wolf hacker, organised crime and terrorism, and nation states with well-funded advanced capabilities.
Protecting ourselves raises an interesting dilemma. What level of monitoring and activity reporting are you prepared to put up with to enable more accurate or earlier collaborative identification of malice?
Why Artificial Intelligence is the answer to the greatest threat of 2017, cyber-hacking | The Independent
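As a rough illustration of the kind of automated monitoring being weighed up here, the sketch below flags unusual spikes in an activity log against a simple statistical baseline. The data, the failed-login scenario and the threshold are assumptions made for illustration; real AI-based intrusion detection is far more sophisticated than this.

# Minimal, hypothetical monitoring sketch: flag hours whose activity counts
# deviate strongly from the baseline. Illustrative assumptions throughout.

from statistics import mean, stdev

def flag_anomalies(counts_per_hour: list[int], threshold: float = 3.0) -> list[int]:
    """Return the hours whose counts sit more than `threshold` deviations from the mean."""
    mu, sigma = mean(counts_per_hour), stdev(counts_per_hour)
    if sigma == 0:
        return []
    return [hour for hour, count in enumerate(counts_per_hour)
            if abs(count - mu) / sigma > threshold]

if __name__ == "__main__":
    # 24 hours of failed-login counts; hour 13 spikes suspiciously.
    counts = [3, 2, 4, 3, 5, 2, 3, 4, 3, 2, 4, 3, 5, 180,
              4, 3, 2, 4, 3, 5, 2, 3, 4, 3]
    print("Suspicious hours:", flag_anomalies(counts))

Even this toy version makes Clark's dilemma visible: the more of our activity that is logged and scored in this way, the earlier malice can be spotted, and the more of ourselves we hand over to the watchers.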
The predictions are not all good:
Forget ideology, liberal democracy’s newest threats come from technology and bioscience
A groundbreaking book by historian Yuval Harari claims that artificial intelligence and genetic enhancements will usher in a world of inequality and powerful elites. How real is the threat?
John Naughton 28 August 2016
The BBC Reith Lectures in 1967 were given by Edmund Leach, a Cambridge social anthropologist. “Men have become like gods,” Leach began. “Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid.”
That was nearly half a century ago, and yet Leach’s opening lines could easily apply to today. He was speaking before the internet had been built and long before the human genome had been decoded, and so his claim about men becoming “like gods” seems relatively modest compared with the capabilities that molecular biology and computing have subsequently bestowed upon us. Our science-based culture is the most powerful in history, and it is ceaselessly researching, exploring, developing and growing. But in recent times it seems to have also become plagued with existential angst as the implications of human ingenuity begin to be (dimly) glimpsed.
The title that Leach chose for his Reith Lecture – A Runaway World – captures our zeitgeist too. At any rate, we are also increasingly fretful about a world that seems to be running out of control, largely (but not solely) because of information technology and what the life sciences are making possible. But we seek consolation in the thought that “it was always thus”: people felt alarmed about steam in George Eliot’s time and got worked up about electricity, the telegraph and the telephone as they arrived on the scene. The reassuring implication is that we weathered those technological storms, and so we will weather this one too. Humankind will muddle through.
But in the last five years or so even that cautious, pragmatic optimism has begun to erode. There are several reasons for this loss of confidence. One is the sheer vertiginous pace of technological change. Another is that the new forces now loose in our society – particularly information technology and the life sciences – are potentially more far-reaching in their implications than steam or electricity ever were. And, thirdly, we have begun to see startling advances in these fields that have forced us to recalibrate our expectations.
Forget ideology, liberal democracy’s newest threats come from technology and bioscience | John Naughton | Opinion | The Guardian
Algorithms: AI’s creepy control must be open to inspection
The accountability of artificial intelligence systems, from Facebook to healthcare, is shaping up to be a hot topic in 2017
Luke Dormehl 1 January 2017
The past year marked the 60th year of artificial intelligence – and, boy, did it have a lively birthday. Pop open a computer science journal on your laptop during 2016 and you’d be assured that not only was progress happening, but it was doing so much, much faster than predicted. Today, AI and algorithms dominate our lives – from the way financial markets carry out trades to the discovery of new pharmaceutical drugs and the means by which we discover and consume our news.
But, like any invisible authority, such systems should be open to scrutiny. Yet too often they are not open and we are not even fully aware that such systems play the roles they do. For years now, companies such as Amazon, Google and Facebook have personalised the information we are fed; combing through our “metadata” to choose items they think we are most likely to be interested in. This is in stark contrast to the early days of online anonymity when a popular New Yorker cartoon depicted a computer-using canine with the humorous tagline: “On the internet, nobody knows you’re a dog.” In 2017, not only do online companies know that we’re dogs, but also our breed and whether we prefer Bakers or Pedigree.
The use of algorithms to control the way that we’re treated extends well beyond Google’s personalised search or Facebook’s customised news feed. Tech giants such as Cisco have explored the way in which the internet could be divided into groups of customers who would receive preferential download speeds based on their perceived value. Other companies promise to use breakthroughs in speech-recognition technology in call centres: sending customers through to people with a similar personality type to their own for more effective call resolution rates.
It is a mistake always to decry this kind of personalisation as a negative. The futurist and writer Arthur C Clarke once noted that any sufficiently advanced technology is indistinguishable from magic. Most of us will have had the awed feeling of watching a really good magic trick when our smartphone, unprompted, pops up a relevant piece of information at just the right moment – like your iPhone remembering where your car is parked.
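To make the "combing through our metadata" point concrete, here is a minimal, hypothetical sketch of content-based personalisation: items are ranked by how well their metadata tags match an interest profile inferred from a user's history. The tags, weights and scoring are invented for illustration and do not describe Facebook's, Google's or anyone else's actual system.

# Minimal, hypothetical personalisation sketch: rank items by how well their
# metadata tags match a user's inferred interest profile. Illustrative only.

def score(item_tags: set[str], profile: dict[str, float]) -> float:
    """Sum the user's inferred interest weights for the tags an item carries."""
    return sum(profile.get(tag, 0.0) for tag in item_tags)

def recommend(items: dict[str, set[str]], profile: dict[str, float], k: int = 3) -> list[str]:
    """Return the k items whose metadata best matches the interest profile."""
    return sorted(items, key=lambda name: score(items[name], profile), reverse=True)[:k]

if __name__ == "__main__":
    profile = {"dogs": 0.9, "technology": 0.6, "finance": 0.1}  # inferred from past clicks
    items = {
        "Best dog food compared": {"dogs", "shopping"},
        "Markets rally on tech earnings": {"finance", "technology"},
        "New phone review": {"technology"},
    }
    print(recommend(items, profile, k=2))

The sketch also shows why Dormehl's call for scrutiny matters: the profile that drives the ranking is invisible to the person being ranked, yet it quietly decides which items they ever get to see.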