Join discussions in order to build understanding of concepts in service science. Here is our curriculum guide.
Follow Jim (@JimSpohrer) on Twitter
About this site & registering.
Advanced analytics, from big data to machine learning to cognitive computing, is poised to transform, and in many cases is already transforming, enterprises in fundamental ways. Analytics is both an enabler and a driver of enterprise transformation. The objective of this special issue is twofold: (i) to publish rigorous and innovative research on the role that analytics plays in enabling enterprises to transform by adopting these capabilities at scale in the digital era, and (ii) to gauge the pace of adoption of advanced analytics (e.g., big data, machine learning, cognitive computing) toward that end.
We welcome papers that explore how analytics, in all its forms, is being adopted to transform processes across functions or entire organizations. Analytics can take many forms, including data visualization, descriptive analysis, predictive alerts and recommendations, dashboards, machine learning algorithms, and cognitive computing. Analytics can occur at many levels of an organization, from the boardroom to the shop floor. Of particular interest are not the specific applications or technical solutions, but rather the adoption of these capabilities across the organization to transform decision-making processes at scale, while the organizations themselves are being disrupted in the digital era. We encourage submissions that draw from diverse theoretical backgrounds such as engineering, computer science, decision science, creative visual design, organizational design, and behavioral economics. We are open to a wide set of methodological approaches, including empirical research, case-based research, field studies, and behavioral decision-making experiments, among others. We encourage collaboration between academia and industry, and welcome submissions that are diverse in both industry and geography.
Some prospective topics include:
Value of Analytics
The Data Economy
Adoption of Advanced Analytics
For further information regarding this special issue, please email the Special Issue Editors:
Cognitive Assistance in Government and Public Sector Applications
November 9-11, 2017 Washington, DC
Cognitive Assistance is an important focus area for AI. While it has several facets and still lacks a precise definition (one of the reasons for this Symposium!), it has in the past been called augmented intelligence, the automation of knowledge work, intelligence amplification, cognitive prostheses, and cognitive analytics. It is generally agreed [1] that even while fully automated AI is still being developed, there are many ways in which people can (and already do) benefit from automated support, when it is appropriate and intelligently provided.
This symposium solicits innovative contributions to the research, development, and application of Cognitive Assistance technology for use in government (executive agencies and the legislative and judicial branches), education, and healthcare. These areas differ considerably, but they share characteristics that make them prime candidate application areas for Cognitive Assistance: complex knowledge interdependencies that take years to master; human experts who support less-informed clients with urgent needs; and legal and social requirements for accurate and timely help.
This year we will expand the dialog between the user, academic, and industry communities to discuss the following topics:
[1] It has been noted that “Humans will likely be needed to actively engage with AI technologies throughout the process of completing tasks” [“Artificial Intelligence, Automation, and the Economy”, Executive Office of the President, December 2016].
We solicit ideas for and participation in panel discussions among public sector representatives to articulate their needs for and concerns about the use of cognitive assistance in their domains. We hope to also have panels with users and technologists exploring common problems faced by users, the opportunities for the cognitive assistant to assist, what information is available, and what would be measures of success for a solution.
We also invite students and researchers to propose demonstrations of state-of-the-art approaches to cognitive assistance technology and ideas relevant to the public sector.
The symposium will include presentations of accepted papers in both oral and panel discussion formats. Potential symposium participants are invited to submit either a full-length technical paper or a short position paper for discussion. Full-length papers must be no longer than eight (8) pages, including references and figures. Short submissions can be up to four (4) pages in length and describe speculative work, work in progress, system demonstrations, or panel discussions.
Please submit directly to Fstein@us.ibm.com with FSS-17 in the subject line. Please submit by July 21.
Organizing Committee: Frank Stein, IBM (Chair)
Lashon Booker, MITRE Chris Codella, IBM Eduard Hovy, CMU Chuck Howell, MITRE Anupam Joshi, UMBC Andrew Lacher, MITRE
Jim Spohrer, IBM John Tyler, IBM
Cheap AI/DL in the cloud leads to technology deflation, driving costs out of service systems at all scales. So: (a) AI will rapidly become a commodity, decreasing the cost of nearly everything; (b) large companies will be transformed by automation and augmentation; (c) small companies will flourish, and people will be involved in several simultaneously; and (d) individuals will learn to protect and monetize their personal data. In the long run it will all be OK: everyone who wants to learn will be able to do so rapidly, in a highly personalized way. This sweeping disruption of societal norms happened previously when the steam engine transformed physical work and re-ordered society (people moved from working on small farms to larger and larger factories, and people learned in schools set up like factories). Today, the cognitive engine is transforming mental work and will also profoundly re-order society (people will move from one large company to many simultaneous startups, and people will learn in schools set up like startup incubators, where students work in teams to solve real-world challenges, because the building blocks are so cheap and powerful).
Event – Tuesday May 23, 2017:
The best address is 900 Bernal Road, San Jose, CA 95120. Head to the guard station at the top of the hill and give your name as a visitor. The host must register each visitor's name, title, organization, and citizenship with IBM Security in advance of any visit. Visitor badges can be picked up at the reception desk; look for the flagpole in front of the visitor reception entrance. Best travel times: 30 minutes from San Jose Airport, 50 minutes from Stanford, 70 minutes from San Francisco Airport, 90 minutes from Berkeley, all depending on traffic conditions (double if traffic is heavy).
I am trying to think hard about the 10 million minutes of experience that people go through to develop adult capabilities, as well as the 2 million minutes of experience from adult novice to adult expert – when people transition professions.
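As a back-of-the-envelope check (my own arithmetic, not from the post), those minute counts map onto familiar time spans:

```python
# Convert the 10-million- and 2-million-minute figures above into calendar years.
MIN_PER_YEAR = 60 * 24 * 365      # 525,600 minutes in a year

years_to_adult = 10_000_000 / MIN_PER_YEAR
years_to_expert = 2_000_000 / MIN_PER_YEAR
print(round(years_to_adult, 1))   # 19.0 -> roughly birth through early adulthood
print(round(years_to_expert, 1))  # 3.8 -> a few years of full-time practice
```

So 10 million minutes is about nineteen years of lived experience, and 2 million minutes is roughly the length of a degree program or apprenticeship.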
Mapping these developmental progressions into capabilities, and then capabilities into technologies is quite challenging – but fun.
Also, going the other direction from technological capabilities to specific applications, and from specific application to general capabilities of intelligence.
(1) Grand Challenges: General person at an age level -> development of general capabilities on tasks -> universal architecture – > open source technologies
(2) Practical Applications: Specific open source technologies -> specific capabilities for tasks/applications -> role in universal architecture
(3) Data sets for benchmarking performance improvements over time
(4) Rapidly rebuilding open source technologies from scratch -> booting up the universal architecture with minimal data/code (including synthetic data from simulations)
Part 1: Cognitive OpenTech Progress
Now consider the relative importance of big Data, Cloud compute power, and new Algorithms as a Service in making progress… we can call all these factors the DCAaaS drivers of progress.
In 2011, IBM Watson's Jeopardy! victory on the TV quiz show would not have been possible without the existence of Wikipedia: big data that was crowdsourced, and that represents a compilation of knowledge across human history, including recent movies, sports events, political changes, and other current events as well as historic events. Wolfram published an interesting analysis of how close "brute force" approaches were coming to this type of Q&A task, based on compiled human knowledge and facts. In many ways this is an example of GOFAI (Good Old-Fashioned AI) with a twist. GOFAI includes giant knowledge graphs built by people, such as ConceptNet. The modern twist, available now but not in the 1980s, is crowdsourcing the construction of the "big data."
In 2016, Google/DeepMind's AlphaGo victory in the game of Go would not have been possible without synthetic data: massive amounts of data generated by brute-force simulated game playing. In 2017, CMU's Libratus victory in poker (Texas Hold 'Em) was also dependent on big data from simulated game playing. Generating synthetic data sets based on foundational crowdsourced data sets has been key to many recent ImageNet Challenge annual performance improvements and victories. Additional "big data" in the form of synthetic data generated from crowdsourced data is also a hot topic in OpenAI's Universe project (background: generated data).
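The self-play idea can be sketched in a few lines. This is my own toy illustration, not AlphaGo's or Libratus's actual pipeline: play many randomized games of a tiny made-up "race to 10" game and log (state, player-to-move, eventual winner) rows, which is exactly the kind of synthetic data a value model would then train on.

```python
import random

def simulate_game(rng):
    """One random playout: players alternately add 1 or 2; reaching 10 wins."""
    total, player, history = 0, 0, []
    while total < 10:
        history.append((total, player))   # record the state before each move
        total += rng.choice((1, 2))       # a random policy stands in for search
        player = 1 - player
    winner = 1 - player                   # the player who just moved reached 10
    return [(state, turn, winner) for state, turn in history]

rng = random.Random(0)
dataset = [row for _ in range(1000) for row in simulate_game(rng)]
# 'dataset' now holds thousands of synthetic training examples from simulation.
```

The point is the shape of the loop: cheap simulation turns a rules description into an effectively unlimited supply of labeled training data.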
Three speakers explain the importance of big Data, Cloud compute, and Algorithm advances as a Service (DCAaaS), or simply "better building blocks" – see:
Andrej Karpathy (OpenAI): https://www.youtube.com/watch?v=u6aEYuemt0M
Richard Socher (Salesforce): https://www.youtube.com/watch?v=oGk1v1jQITw
Quoc V. Le (Google): https://www.youtube.com/watch?v=G5RY_SUJih4
In addition to "big data" that is (1) crowdsourced, like Wikipedia and ImageNet, and (2) machine generated ("synthetic data"), as in AlphaGo, Libratus, and OpenAI Universe, each of us has a stockpile of (3) personal data on our computers, smartphones, social media accounts, etc.
Rhizome's blog has an interesting post about the Web Recorder tool. Web Recorder greatly expands the amount of personal data we can capture, while aggregating our personal browsing history of things we find interesting on the web into a type of internet archive. A kind of collective, digital social memory is emerging.
In sum, more and better data, compute, and algorithms are fueling the rapid pace of Cognitive OpenTech developments.
Part 2: Grand Challenges of AI/CogSci Progress
A universal architecture for machine intelligence is beginning to emerge: a dynamic memory. Imagine a dynamic memory that stores and uses information to predict possible futures better, and more energy-efficiently, than any previously known process. This capability provides a type of episodic memory of text, pictures, and videos for question answering (see minute 50+ in the Socher video above). The dynamic memory includes both RNN (recurrent neural network) models and large knowledge-graph models (as found in GOFAI) for making inferences, answering questions, and taking other appropriate actions.
What is a dynamic memory good for? Most of us have taken a standardized test with story questions. The test taker is asked to read a story, look at a sequence of pictures, or watch a video, and then answer some simple questions. In grade school, these "story tests" are simple commonsense-reasoning tasks, where the answer is always explicit in the story. As we get older, the stories get harder, and inference is required beyond commonsense knowledge, tapping into "book learning" and "expert knowledge" that has been compiled for centuries. Some story questions we can answer from short-term memory (STM); others require long-term memory (LTM). A universal architecture that is a dynamic memory can appropriately combine STM and LTM for question answering.
| Context | Right Ending | Wrong Ending |
| --- | --- | --- |
| Gina misplaced her phone at her grandparents. It wasn't anywhere in the living room. She realized she was in the car before. She grabbed her dad's keys and ran outside. | She found her phone in the car. | She didn't want her phone anymore. |
The example above is interesting, and the ConceptNet5 website FAQ (at the very end) reports that natural language AI systems, including ConceptNet, have not yet surpassed 60% on this test.
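To make the task concrete, here is a naive word-overlap baseline for the story-ending example above. This is my own illustrative sketch, not ConceptNet's method: it picks the candidate ending sharing more words with the context, and it happens to get this example right, which is part of why surface baselines make the sub-60% ceiling interesting.

```python
import re

def words(text):
    """Lowercased word set; the regex keeps letters and apostrophes."""
    return set(re.findall(r"[a-z']+", text.lower()))

context = ("Gina misplaced her phone at her grandparents. It wasn't anywhere "
           "in the living room. She realized she was in the car before. "
           "She grabbed her dad's keys and ran outside.")
endings = ["She found her phone in the car.",
           "She didn't want her phone anymore."]

# Choose the ending with the larger word overlap with the context.
best = max(endings, key=lambda e: len(words(context) & words(e)))
print(best)  # "She found her phone in the car."
```

Real systems must do better than such surface cues, which is exactly where commonsense inference over a dynamic memory comes in.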
As highlighted in the Karpathy, Socher, and Le videos above, data in the form of sequences of text, sequences of images, and sections of videos (and audio recordings) are all being used as input to tell simple stories. These stories (data) are snippets of external-reality representation, with some measure of internal-model feedback loops. They are thus approaching (1) an experience representation and (2) an episodic memory, what Schank called "dynamic memory," that is beginning to be used in story processing and question-answering tasks, what Schank called "scripts, plans, goals, and understanding."
The remaining grand challenge problems of AI/CogSci are being worked on by university, industry, and government research labs around the world, and rapid progress is expected, thanks in part to cognitive opentech: data, cloud (compute), and algorithms as a service, all very easy to access, including from the smartphones that never leave our side. The models being generated will have more and more universal applicability over time, and should boost the creativity and productivity of end-users who apply these technologies to solve new and interesting problems, as advocated by Garry Kasparov. Kasparov, the world champion chess grand master, lost a game to Deep Blue in 1996 and the rematch in 1997. Today, notably, Kasparov is learning to love machine intelligence.
IA (Intelligence Augmentation) is a long-standing grand challenge that involves people and machine intelligence thinking better together. IA is the key to what the NSF, JST, VTT, OECD, and other organizations have started referring to as smarter/wiser service systems. IBM has made many contributions to intelligence augmentation; both intelligence augmentation and collaborative intelligence will benefit the world.
The past, present, and future of measuring AI progress is becoming an important area of research.
While leading IBM Global University Programs for seven years, my team and I developed the 6 R’s of industry-university programs:
These 6 R’s are described with examples in this presentation and this paper:
Click on the links above to download the presentation and paper.
Done well, the 6 R’s can help boost an all important 7th R = Reputation, or brand of the company when working with universities.
1. Csikszentmihalyi M (1990) Flow: The psychology of optimal experience. NY: Harper.
2. Hendy S, Callaghan P (2013) Get off the grass: Kickstarting New Zealand's innovation economy. Auckland University Press.
Review 1: http://www.noted.co.nz/archive/listener-nz-2013/book-review-get-off-the-grass-by-shaun-hendy-and-paul-callaghan/
Review 2: http://sciblogs.co.nz/griffins-gadgets/2013/08/16/review-get-off-the-grass/
Weta Digital: https://en.wikipedia.org/wiki/Weta_Digital
3. Anderson JC, Kumar N, Narus JA (2007) Value merchants: Demonstrating and documenting superior value in business markets. Harvard Business Press.
4. Johnson C, Lusch R, Schmidtz D (2016) Ethics, Economy, and Entrepreneurship. SagentsLab.
5. Wright R (2009) The evolution of God. Boston: Back Bay Books.
1. Gigerenzer G (2010) Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in cognitive science. 2(3):528-54.
Pat Langley suggested the above.
The author wrote: Herbert Simon once told me that he wanted to sue people who misuse his concept for another form of optimization.
2. Andrej Karpathy, Deep Learning
This video: https://www.youtube.com/watch?v=u6aEYuemt0M
This tool on arXiv is especially cool: http://arxiv-sanity.com/
All created by this guy, Andrej Karpathy: https://www.linkedin.com/in/andrej-karpathy-9a650716/
3. Richard Socher, Deep Learning for NLP
This video: https://www.youtube.com/watch?v=oGk1v1jQITw
4. Rhizome Web Recorder – personal perspective and collections (via Vint Cerf)
5. Garry Kasparov
Garry Kasparov, the first world champion chess grand master to lose to a machine (IBM Deep Blue)
Two years after the loss, he invented and started writing about freestyle chess: people playing better chess, even in competition matches, with a computer partner.
He has gotten over it, and reframes the discussion: https://www.wsj.com/articles/learning-to-love-intelligent-machines-1492174086
We can do new things together that were never before possible… thinking better together.
6. Physics and Biology – understanding when history matters.
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that…
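The Landauer bound mentioned above can be computed directly: erasing ("forgetting") one bit of memory dissipates at least k_B * T * ln(2) joules. A quick check at room temperature, using the SI value of the Boltzmann constant:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # approximate room temperature, kelvin

# Landauer's minimum energy cost of erasing one bit of information.
e_bit = k_B * T * math.log(2)
print(e_bit)         # ~2.87e-21 joules per erased bit
```

Today's hardware dissipates many orders of magnitude more energy per bit operation than this floor, which is the sense in which "the best computers today are far, far more wasteful of energy than that."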
So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium.
England’s definition of “adaptation” is closer to Schrödinger’s, and indeed to Maxwell’s: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she’s better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.
You might say that the system of particles experiences a kind of urge to preserve freedom of future action, and that this urge guides its behavior at any moment. The researchers who developed the model — Alexander Wissner-Gross at Harvard University and Cameron Freer, a mathematician at the Massachusetts Institute of Technology — call this a “causal entropic force.” In computer simulations of configurations of disk-shaped particles moving around in particular settings, this force creates outcomes that are eerily suggestive of intelligence.
In other words, the appearance of life on a planet like the early Earth, imbued with energy sources such as sunlight and volcanic activity that keep things churning out of equilibrium, starts to seem not an extremely unlikely event, as many scientists have assumed, but virtually inevitable.
7. Todd Kelsey
RGB book – Health, Environment, Community – http://www.rgbexchange.org/book
Stock exchange for non-profits idea
Digital archeology – Vint Cerf and Rhizome's WebRecorder. Link: https://www.youtube.com/watch?v=n3SqusABXEk
Todd Kelsey wrote:
AI might be capable of somehow communicating these things to a new generation
a great grandchild who never met their ancestor might have an opportunity to learn from them and be mentored by them
Yes, this was termed "weak immortality" by Doug Lenat in this AI Magazine issue:
Search for “Weak Immortality” – it will arrive around 2035 with cognitive mediators who know us, in some ways, better than we know ourselves.
You might also enjoy how these systems seem to be evolving:
The ability to rapidly rebuild from scratch seems to be an attractor in energy/knowledge rich systems – for example, seed to tree, planet to life, etc.
1.Henry Chesbrough/Solomon Darwin (Chief Innovation Officer) discussion – April 19, 2017.
My three questions for Henry to ask are:
(1) What will be the concrete outcomes from the Partnership on AI?
(2) When can we expect the unsolved challenges to be completed?
(3) What can individuals do to ensure benefits are achieved and social challenges mitigated?
My three points would be:
– IBM and the industry (Google, Microsoft, Amazon, Apple, etc.) are working on thematic pillars together:
– Building blocks getting better, but unsolved grand challenges remain
– deep learning breakthroughs by Hinton (U Toronto) 2012 for images and speech
- hard problems remain unsolved before super-human general intelligence is achieved
– commonsense reasoning, social interactions
– fluent conversation, ingest textbooks, creative collaboration
– Benefits and challenges for people, business, and society along the way…
– gigantic boost in productivity (what would you do with 100 digital workers working for you?)
– Baylor Biochemical Engineering example
– driverless cars can become guided missiles and/or smart enough not to be used to drive over people
– we already have super-intelligence, called corporations and governments
– they cannot always be held accountable for doing stupid/bad things
– 2008 financial meltdown
2. After meeting Per-Ane Lundberg (Sweden, Science Cluster, Gaming):
Here is the video that gives a good sense of open, sharing, and the future – see the 90 minute video at the bottom of this blog:
Here is the blog post about Zanker Recycling, with pictures: http://service-science.info/archives/4525
Los Esteros Rd, San Jose, CA 95134
For tours contact: Michael Gross <firstname.lastname@example.org>, Jerame Renteria <email@example.com>
You also should meet the folks at Cogswell for gaming and movie making: https://cogswell.edu/
191 Baypointe Pkwy, San Jose, CA 95134
They have an interesting history: https://en.wikipedia.org/wiki/Cogswell_Polytechnical_College
Solomon Darwin about his upcoming events, and their invitation list –
Henry Chesbrough is also a good colleague – father of open innovation – and he keynoted at this NSF workshop we ran a couple weeks ago:
The key to the future of regions is energy-independence plans; things get easy after that. Water is just energy, and so are food, shelter, recycling, and the circular economy. It all depends on energy. Geothermal is probably the way to go in most places, the artificial leaf is also very promising, and nuclear will make a smaller comeback too. http://service-science.info/archives/4463
See slide #9 for a summary of 2035-ish: https://www.slideshare.net/spohrer/silicon-vikings-20170307-v2
Also important to understand technology deflation and macroeconomics of printing money and taxes – Sweden does it, so do all major countries:
Technology deflation is real, and the smartphone is a great example of it in action: a black hole absorbing all that is digital, nicely shrinking costs.
3. AI Hardware and things to play with online:
indeed.com tools: http://www.kdnuggets.com/2017/01/most-popular-language-machine-learning-data-science.html
Play with Google Trends more: https://medium.com/@karpathy/a-peek-at-trends-in-machine-learning-ab8a1085a106
ISSIP.org and Japan Science and Technology (JST) co-created a workshop and here is the report:
Naohumi Yamada, Center for R&D Strategy, JST
E-Mail firstname.lastname@example.org http://www.jst.go.jp/crds