Join discussions in order to build understanding of concepts in service science. Here is our curriculum guide.
Follow Jim (@JimSpohrer) on Twitter
About this site & registering.
Cheap AI/DL in the cloud leads to technology deflation, driving costs out of service systems at all scales. So: (a) AI will rapidly become a commodity, decreasing the cost of nearly everything; (b) large companies will be transformed by automation and augmentation; (c) small companies will flourish, and people will be involved in several simultaneously; and (d) individuals will learn to protect and monetize their personal data. In the long run, it will all be OK: everyone who wants to learn will be able to do so rapidly, in a highly personalized way. This sweeping disruption of societal norms happened previously when the steam engine transformed physical work and re-ordered society (people moved from small farms to larger and larger factories, and learned in schools set up like factories). Today, the cognitive engine is transforming mental work and will also profoundly re-order society (people will move from one large company to many simultaneous startups, and will learn in schools set up like startup incubators, where students work in teams to solve real-world challenges because the building blocks are so cheap and powerful).
Event – Tuesday May 23, 2017:
Best address is 900 Bernal Road, San Jose, CA 95120 – head to the guard station at the top of the hill and give your name as a visitor. Hosts must register each visitor's name, title, organization, and citizenship with IBM Security in advance of any visit. Visitor badges can be picked up at the reception desk; look for the flagpole in front to find the visitor reception entrance. Best travel times: 30 minutes from San Jose Airport, 50 minutes from Stanford, 70 minutes from San Francisco Airport, 90 minutes from Berkeley – all depending on traffic conditions; double if traffic is heavy.
I am trying to think hard about the roughly 10 million minutes of experience that people go through to develop adult capabilities, as well as the roughly 2 million minutes of experience it takes to go from adult novice to adult expert when people transition professions.
Mapping these developmental progressions into capabilities, and then capabilities into technologies is quite challenging – but fun.
Also, going the other direction from technological capabilities to specific applications, and from specific application to general capabilities of intelligence.
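Those minute counts can be sanity-checked with quick arithmetic. A small sketch (assuming roughly 16 waking hours per day, which is my assumption, not a figure from the text):

```python
# Rough conversion of "minutes of experience" into calendar years,
# assuming about 16 waking hours (960 minutes) per day.
WAKING_MINUTES_PER_DAY = 16 * 60

def minutes_to_years(minutes):
    """Convert waking-experience minutes to approximate calendar years."""
    return minutes / (WAKING_MINUTES_PER_DAY * 365)

print(round(minutes_to_years(10_000_000), 1))  # ~28.5 years: childhood to adult capabilities
print(round(minutes_to_years(2_000_000), 1))   # ~5.7 years: adult novice to adult expert
```

Under that assumption, 10 million minutes is roughly a human lifetime into early adulthood, and 2 million minutes is several years of full-time professional experience, which fits the novice-to-expert framing.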
(1) Grand Challenges: General person at an age level -> development of general capabilities on tasks -> universal architecture -> open source technologies
(2) Practical Applications: Specific open source technologies -> specific capabilities for tasks/applications -> role in universal architecture
(3) Data sets for benchmarking performance improvements over time
(4) Rapidly rebuilding open source technologies from scratch -> booting up the universal architecture with minimal data/code (including synthetic data from simulations)
Part 1: Cognitive OpenTech Progress
Now consider the relative importance of big Data, Cloud compute power, and new Algorithms as a Service in making progress… we can call all these factors the DCAaaS drivers of progress.
In 2011, IBM Watson's victory on the TV quiz show Jeopardy! would not have been possible without the existence of Wikipedia: big data that was crowdsourced, representing a compilation of knowledge across human history, including recent movies, sports events, political changes, and other current as well as historic events. Wolfram published an interesting analysis of how close "brute force" approaches were coming on this type of Q&A task, based on compiled human knowledge and facts. In many ways this is an example of GOFAI (Good Old-Fashioned AI) with a twist. GOFAI includes people building giant knowledge graphs, such as ConceptNet. The modern twist, now available but not available in the 1980s, is crowdsourcing the construction of the "big data."
In 2016, Google DeepMind's AlphaGo victory in the game of Go would not have been possible without synthetic data: massive amounts of data generated by brute-force simulated game play. In 2017, CMU's Libratus victory in poker (Texas Hold'em) also depended on big data from simulated game play. Generating synthetic data sets from foundational crowdsourced data sets has been key to many recent ImageNet Challenge annual performance improvements and victories. Additional "big data" in the form of synthetic data generated from crowdsourced data is a hot topic in OpenAI's Universe project (background: generated data) as well.
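The idea of synthetic data from simulated play can be illustrated with a toy sketch. This is not AlphaGo's or Libratus's actual pipeline (those use far more sophisticated search and learning); it just shows how random self-play on a simple game (Nim, a game I chose for illustration) produces labeled training examples from zero human game records:

```python
import random

# Toy illustration of synthetic data from self-play (NOT AlphaGo's actual
# pipeline): random playouts of Nim generate labeled (position, winner)
# training examples without any human-played games.

def random_playout(stones, rng):
    """Play Nim (take 1-3 stones; taker of the last stone wins) with
    random moves. Returns the visited (stones, player_to_move) states
    and the winning player."""
    states, player = [], 0
    while stones > 0:
        states.append((stones, player))
        stones -= rng.randint(1, min(3, stones))
        player = 1 - player
    winner = 1 - player  # the player who took the last stone
    return states, winner

def generate_dataset(games, start_stones=10, seed=0):
    """Label every visited position with whether the player to move
    eventually won that game."""
    rng = random.Random(seed)
    data = []
    for _ in range(games):
        states, winner = random_playout(start_stones, rng)
        for stones, player in states:
            data.append((stones, player, int(player == winner)))
    return data

dataset = generate_dataset(1000)
print(len(dataset))  # thousands of labeled positions from zero human data
```

The point is the economics: once the rules of the environment are encoded, training data is effectively free, which is exactly what made the game-playing victories above possible.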
Three speakers explain the importance of big Data, Cloud compute, and Algorithm advances as a Service (DCAaaS), or simply "better building blocks" – see:
|Andrej_Karpathy (OpenAI) https://www.youtube.com/watch?v=u6aEYuemt0M|
|Richard_Socher (Salesforce) https://www.youtube.com/watch?v=oGk1v1jQITw|
|Quoc V. Le (Google) https://www.youtube.com/watch?v=G5RY_SUJih4|
In addition to "big data" that is (1) crowdsourced, like Wikipedia and ImageNet, and (2) machine-generated ("synthetic data"), as in AlphaGo, Libratus, and OpenAI Universe, each of us has a stockpile of (3) personal data on our computers, smartphones, social media accounts, etc.
Rhizome's blog has an interesting post about the Web Recorder tool. Web Recorder greatly expands the amount of personal data we keep, aggregating our personal browsing history of things we find interesting on the web into a type of internet archive. A type of collective digital social memory is emerging.
In sum, more and better data, compute, and algorithms are fueling the rapid pace of Cognitive OpenTech developments.
Part 2: Grand Challenges of AI/CogSci Progress
A universal architecture for machine intelligence is beginning to emerge, and that architecture is a dynamic memory. Imagine a dynamic memory that stores and uses information to predict possible futures better, and more energy-efficiently, than any process known in the past. This capability provides a type of episodic memory of text, pictures, and videos for question answering (see minute 50+ in the Socher video above). The dynamic memory includes both RNN (Recurrent Neural Network) models and large knowledge-graph models (as found in GOFAI) for making inferences, answering questions, and taking other appropriate actions.
What is a dynamic memory good for? Most of us have taken a standardized test with story questions. The test taker is asked to read a story, look at a sequence of pictures, or watch a video, and then answer some simple questions. In grade school, these "story tests" are simple commonsense-reasoning tasks, where the answer is always explicit in the story. As we get older, the stories get harder; inference is required beyond commonsense knowledge, tapping into "book learning" and "expert knowledge" that has been compiled over centuries. Some story questions we can answer from short-term memory (STM), and others require long-term memory (LTM). A universal architecture that is a dynamic memory can appropriately combine both STM and LTM for question answering.
|Context|Right Ending|Wrong Ending|
|Gina misplaced her phone at her grandparents. It wasn’t anywhere in the living room. She realized she was in the car before. She grabbed her dad’s keys and ran outside.|She found her phone in the car.|She didn’t want her phone anymore.|
The example above is interesting, and the ConceptNet5 website FAQ (at the very end) reports that natural-language AI systems, including ConceptNet, have not yet surpassed 60% on this test.
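To see why the task is subtle, consider the simplest conceivable baseline (hypothetical code of my own, not ConceptNet's method): score each candidate ending by how many words it shares with the context. On this particular example the surface cue happens to favor the right ending, but such shallow overlap is easily fooled, which is part of why systems plateau near 60%:

```python
# Hypothetical word-overlap baseline for the two-ending story test above.
# This is NOT ConceptNet's approach, just a minimal illustration.

def score(context, ending):
    """Count words the ending shares with the context (case-insensitive)."""
    ctx_words = set(context.lower().split())
    return sum(1 for w in ending.lower().rstrip(".").split() if w in ctx_words)

context = ("Gina misplaced her phone at her grandparents. It wasn't anywhere "
           "in the living room. She realized she was in the car before. "
           "She grabbed her dad's keys and ran outside.")
right = "She found her phone in the car."
wrong = "She didn't want her phone anymore."

print(score(context, right), score(context, wrong))  # 6 3
```

Here word overlap picks the right ending, but distinguishing "found her phone" from "didn't want her phone" in general requires commonsense inference about goals, which is exactly what the dynamic-memory architectures above aim to provide.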
As highlighted above in the Karpathy, Socher, and Le videos, data in the form of text sequences, image sequences, and sections of video (and audio recordings) are all being used as input to tell simple stories. These stories (data) are snippets of external-reality representation, with some measure of internal-model feedback loops, so they are approaching (1) an experience representation and (2) an episodic memory – what Schank called "dynamic memory" – that is beginning to be used in story-processing and question-answering tasks – what Schank called "scripts, plans, goals, and understanding."
The remaining grand challenge problems of AI/CogSci are being worked on by university, industry, and government research labs around the world, and rapid progress is expected, thanks in part to cognitive opentech – data, cloud (compute), and algorithms as a service, very easy to access, including from the smartphones that never leave our side. The models being generated will have more and more universal applicability over time, and should boost the creativity and productivity of end users who use these technologies to solve new and interesting problems, as advocated by Garry Kasparov. Kasparov, the world chess champion, lost a game to Deep Blue in 1996 and the match in 1997. Today, notably in the news, Kasparov is learning to love machine intelligence.
IA (Intelligence Augmentation) is a long-standing grand challenge that involves people and machine intelligence together – thinking better together. IA is the key to what the NSF, JST, VTT, OECD, and other organizations have started referring to as smarter/wiser service systems. IBM has made many contributions to intelligence augmentation; both intelligence augmentation and collaborative intelligence will benefit the world.
The past, present, and future of measuring AI progress is becoming an important area of research.
While leading IBM Global University Programs for seven years, my team and I developed the 6 R’s of industry-university programs:
These 6 R’s are described with examples in this presentation and this paper:
Click on the links above to download the presentation and paper.
Done well, the 6 R’s can help boost an all important 7th R = Reputation, or brand of the company when working with universities.
1. Csikszentmihalyi M (1990) Flow: The psychology of optimal experience. NY: Harper.
2. Hendy S, Callaghan P (2013) Get off the grass: Kickstarting New Zealand's innovation economy. Auckland University Press.
Review 1: http://www.noted.co.nz/archive/listener-nz-2013/book-review-get-off-the-grass-by-shaun-hendy-and-paul-callaghan/
Review 2: http://sciblogs.co.nz/griffins-gadgets/2013/08/16/review-get-off-the-grass/
Weta Digital: https://en.wikipedia.org/wiki/Weta_Digital
3. Anderson JC, Kumar N, Narus JA (2007) Value merchants: Demonstrating and documenting superior value in business markets. Harvard Business Press.
4. Johnson C, Lusch R, Schmidtz D (2016) Ethics, economy, and entrepreneurship. SagentsLab.
5. Wright R (2009) The evolution of God. Boston: Back Bay Books.
1. Gigerenzer G (2010) Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in cognitive science. 2(3):528-54.
Pat Langley suggested the above.
The author wrote: Herbert Simon once told me that he wanted to sue people who misuse his concept for another form of optimization.
2. Andrej Karpathy, Deep Learning
This video: https://www.youtube.com/watch?v=u6aEYuemt0M
This tool on arXiv is especially cool: http://arxiv-sanity.com/
All created by this guy, Andrej Karpathy: https://www.linkedin.com/in/andrej-karpathy-9a650716/
3. Richard Socher, Deep Learning for NLP
This video: https://www.youtube.com/watch?v=oGk1v1jQITw
4. Rhizome Web Recorder – personal perspective and collections (via Vint Cerf)
5. Garry Kasparov
Garry Kasparov, the first world chess champion to lose a match to a machine (IBM Deep Blue)
Two years after the loss, he invented and started writing about freestyle chess – people playing better chess, even in competitive matches, with a computer buddy.
He has gotten over it, and reframes the discussion: https://www.wsj.com/articles/learning-to-love-intelligent-machines-1492174086
We can do new things together that were never before possible… thinking better together.
6. Physics and Biology – understanding when history matters.
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that…
So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium.
England’s definition of “adaptation” is closer to Schrödinger’s, and indeed to Maxwell’s: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she’s better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.
You might say that the system of particles experiences a kind of urge to preserve freedom of future action, and that this urge guides its behavior at any moment. The researchers who developed the model — Alexander Wissner-Gross at Harvard University and Cameron Freer, a mathematician at the Massachusetts Institute of Technology — call this a “causal entropic force.” In computer simulations of configurations of disk-shaped particles moving around in particular settings, this force creates outcomes that are eerily suggestive of intelligence.
In other words, the appearance of life on a planet like the early Earth, imbued with energy sources such as sunlight and volcanic activity that keep things churning out of equilibrium, starts to seem not an extremely unlikely event, as many scientists have assumed, but virtually inevitable.
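Landauer's limit, mentioned in the excerpts above, can be made concrete with a quick calculation: erasing one bit of memory at temperature T costs at least k_B·T·ln 2 of energy. A short sketch (room temperature taken as 300 K for illustration):

```python
import math

# Landauer's limit: the minimum energy to erase one bit is k_B * T * ln(2),
# i.e. "the energetic cost of forgetting" referenced above.
K_B = 1.380649e-23   # Boltzmann constant, in J/K
T_ROOM = 300.0       # approximate room temperature, in K

landauer_joules = K_B * T_ROOM * math.log(2)
print(f"{landauer_joules:.2e} J per bit erased")  # ~2.87e-21 J
```

At roughly 3 zeptojoules per bit, this bound is many orders of magnitude below what today's computers dissipate per operation, which is the sense in which the excerpt calls current machines "far, far more wasteful of energy than that."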
7. Todd Kelsey
RGB book – Health, Environment, Community – http://www.rgbexchange.org/book
Stock exchange for non-profits idea
DigitalArcheology – Vint Cerf Rhizome – WebRecorder Link: https://www.youtube.com/watch?v=n3SqusABXEk
Todd Kelsey wrote:
AI might be capable of somehow communicating these things to a new generation
a great grandchild who never met their ancestor might have an opportunity to learn from them and be mentored by them
Yes, termed “weak immortality” by Doug Lenat in this AI Magazine issue:
Search for “Weak Immortality” – it will arrive around 2035 with cognitive mediators who know us, in some ways, better than we know ourselves.
You might also enjoy how these systems seem to be evolving:
The ability to rapidly rebuild from scratch seems to be an attractor in energy/knowledge rich systems – for example, seed to tree, planet to life, etc.
1.Henry Chesbrough/Solomon Darwin (Chief Innovation Officer) discussion – April 19, 2017.
My three questions for Henry to ask are:
(1) What will be the concrete outcomes from the Partnership on AI?
(2) When can we expect the unsolved challenges to be solved?
(3) What can individuals do to ensure benefits are achieved, and social challenges mitigated?
My three points would be:
– IBM and the industry (Google, Microsoft, Amazon, Apple, etc.) are working on thematic pillars together:
– Building blocks getting better, but unsolved grand challenges remain
– deep learning breakthroughs by Hinton (U Toronto) 2012 for images and speech
– hard problems remain unsolved before super-human general intelligence can be achieved
– commonsense reasoning, social interactions
– fluent conversation, ingest textbooks, creative collaboration
– Benefits and challenges for people, business, and society along the way…
– gigantic boost in productivity (what would you do with 100 digital workers working for you?)
– Baylor Biochemical Engineering example
– driverless cars can become guided missiles and/or smart enough not to be used to drive over people
– we already have super-intelligence, called corporations and governments
– they cannot always be held accountable for doing stupid/bad things
– 2008 financial meltdown
2. After meeting Per-Ane Lundberg (Sweden, Science Cluster, Gaming):
Here is the video that gives a good sense of openness, sharing, and the future – see the 90-minute video at the bottom of this blog post:
Here is the blog of Zanker Recycling with pictures: http://service-science.info/archives/4525
Los Esteros Rd, San Jose, CA 95134
For tours contact: Michael Gross <email@example.com>, Jerame Renteria <firstname.lastname@example.org>
You also should meet the folks at Cogswell for gaming and movie making: https://cogswell.edu/
191 Baypointe Pkwy, San Jose, CA 95134
They have an interesting history: https://en.wikipedia.org/wiki/Cogswell_Polytechnical_College
Solomon Darwin about his upcoming events, and their invitation list –
Henry Chesbrough is also a good colleague – father of open innovation – and he keynoted at this NSF workshop we ran a couple weeks ago:
The key to the future of regions is energy-independence plans; things get easy after that. Water is just energy, and so are food, shelter, recycling, and the circular economy… it all depends on energy. Geothermal is probably the way to go in most places, the artificial leaf is also very good, and nuclear will make a smaller comeback too. http://service-science.info/archives/4463
See slide #9 for a summary of 2035-ish: https://www.slideshare.net/spohrer/silicon-vikings-20170307-v2
Also important: understanding technology deflation and the macroeconomics of printing money and taxes – Sweden does it, as do all major countries:
Technology deflation is real, and the smartphone is a great example of it in action: a black hole absorbing all that is digital, nicely shrinking costs.
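Technology deflation is ultimately a compounding effect, which a quick sketch makes vivid. The 30% annual decline rate below is a made-up, roughly Moore's-law-flavored number for illustration, not a measured figure:

```python
# Illustrative only: how a steady annual price decline compounds.
# A 30% yearly cost drop (an assumed rate, not measured data) shrinks
# a $1000 capability to about $28 in a decade.

def deflated_cost(initial_cost, annual_decline, years):
    """Cost after compounding an annual fractional price decline."""
    return initial_cost * (1 - annual_decline) ** years

print(round(deflated_cost(1000.0, 0.30, 10), 2))  # 28.25
```

That 35x cost reduction in ten years is the mechanism by which the smartphone absorbed the camera, the GPS unit, the music player, and the rest of the digital world.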
3. AI Hardware and things to play with online:
indeed.com tools: http://www.kdnuggets.com/2017/01/most-popular-language-machine-learning-data-science.html
Play with Google Trends more: https://medium.com/@karpathy/a-peek-at-trends-in-machine-learning-ab8a1085a106
ISSIP.org and Japan Science and Technology (JST) co-created a workshop and here is the report:
Naohumi Yamada, Center for R&D Strategy, JST
E-mail: email@example.com http://www.jst.go.jp/crds
I have been asked to present an industry perspective on the need for T-shaped skills at a National Academy of Sciences (NAS) workshop on “STEM Integration Into The Liberal Arts.”
Here is my presentation: https://www.slideshare.net/spohrer/nas-integrated-education-20170406-v7
Here is a podcast (Dave Goldberg, Big Beacon/Whole New Engineer): https://www.voiceamerica.com/episode/98248/exploring-service-science-and-cognitive-systems-an-interview-with-jim-spohrer
In industry, we see the need for business, engineering, social science, communications, and legal/policy people to work together and interact a lot on nearly every project. For example, when IBM's communications department posted this about quantum computing advances http://research.ibm.com/ibm-q/ – behind the scenes, all of the above and more had to interact. In the age of accelerations, this becomes more and more true – just think of driverless cars and Uber, if you like controversies.
A range of people study or work to improve this “need for diverse interactions.”
Some call it interactional expertise: https://en.wikipedia.org/wiki/Interactional_expertise
Some call it T-shapes and empathy: http://chiefexecutive.net/ideo-ceo-tim-brown-t-shaped-stars-the-backbone-of-ideoae%E2%84%A2s-collaborative-culture/
At IBM, we like T-shapes, and the diversity is not just in disciplines, but in systems and cultures as well: http://service-science.info/archives/3328
Of course, integrated education is hard to design. It is hard for a single person to learn even one discipline or area deeply; to learn several requires a polymath, and too often polymaths are broad but not deep – good for connecting, but less so for solving problems that require deep understanding: https://www.wired.com/2013/12/165191/
To achieve integrated education will require a vision and a process – a process that works just a little bit, year over year, to get closer and closer to the goal. I have called this a Moore's law for education (not sure if you can access Example 1 on education here: http://cacm.acm.org/magazines/2006/7/5871-service-systems-service-scientists-ssme-and-innovation/fulltext), and the details have been part of service science. Simply put, service science is about improving our ability to play win-win, positive-sum games through a better understanding of socio-technical system evolution and design. Socio-technical systems of people and technology interconnected by value propositions are known as service systems. The need for T-shaped people to advance service science and service innovation was written about in this report from the University of Cambridge some years ago – notice especially figure 1 on the gaps between academic disciplines: http://www.ceri.msu.edu/wp-content/uploads/2010/06/Cambridge_T-Shaped.pdf
Friedman TL (2016) Thank you for being late: An optimist’s guide to thriving in the age of accelerations. NY: Farrar, Straus and Giroux.
Also watch and listen to these two items:
Jim Corgel on the Right Attitude and Skill Set https://www.youtube.com/watch?v=Gm3cqVuOqMQ
SYSK Empathy: http://www.stuffyoushouldknow.com/podcasts/empathy.htm
For Ashley Bear.
(Arizona State University; Life Sciences Building Room C202; C Wing 401 E. Tyler Mall; Tempe, Arizona; Thursday, April 6th, 2017)
The key question:
Who may share what information with whom for what purpose in what context and be on firm legal ground?
(1) Who may – in the digital age, we can all capture and share information more easily.
(2) What information – even information that is captured digitally can be altered, or opinions can be attached to it.
(3) With whom – sometimes it is OK to share information with some people, but not others.
(4) Purpose – the intention behind the sharing may be important in many cases.
(5) Context – this may override other factors, as in a disaster situation, when normal rules do not apply.
(6) On firm legal ground – this is what title companies do for real property.
Who offers title insurance on real property and how did that come to be?
Who offers title insurance on information? UNKNOWN
Information Provenance: http://itlaw.wikia.com/wiki/Information_provenance
IBM Knowledge Center: Provenance information
The world is a messy place with other people’s data and property all over the place – Google ran into this with Glass –
Scroll down to Privacy Concerns: https://en.wikipedia.org/wiki/Google_Glass
Quote: Additionally, there is controversy that Google Glass would cause security problems and violate privacy rights. Organizations like the FTC Fair Information Practice work to uphold privacy rights through Fair Information Practice Principles (FIPPS), which are guidelines representing concepts that concern fair information practice in an electronic marketplace.
Quote: 4. Integrity/Security Information collectors should ensure that the data they collect is accurate and secure. They can improve the integrity of data by cross-referencing it with only reputable databases and by providing access for the consumer to verify it. Information collectors can keep their data secure by protecting against both internal and external security threats. They can limit access within their company to only necessary employees to protect against internal threats, and they can use encryption and other computer-based security systems to stop outside threats.
For commercial and legal purposes, every time we share digital information via email, social media, etc., we may want a record of the sharing event automatically added to a blockchain. Is this practical? Desirable? Viable? Questions: should we, can we, may we, will we?
What if the person we share the information with – does not read it, process it, understand it? What if they don’t want to receive it?
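Whatever the answers, the record-keeping half of the blockchain idea above is easy to sketch as a hash chain. This is a toy of my own construction (no network, no consensus, so not a real blockchain), but it shows the tamper-evidence property that makes the idea attractive:

```python
import hashlib
import json

# Toy hash chain of sharing events (NOT a real blockchain: no distribution,
# no consensus). Each record commits to the previous record's hash, so
# tampering with any earlier entry breaks every later link.

def add_record(chain, event):
    """Append a sharing event, chaining it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash and check that the links are intact."""
    prev = "0" * 64
    for record in chain:
        body = {"event": record["event"], "prev": record["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != digest:
            return False
        prev = record["hash"]
    return True

log = []
add_record(log, "alice shared report.pdf with bob via email")
add_record(log, "bob shared report.pdf with carol via chat")
print(verify(log))   # True
log[0]["event"] = "tampered"
print(verify(log))   # False
```

Note that a hash chain records that sharing happened, but says nothing about whether the recipient read, processed, understood, or even wanted the information – the questions raised above remain.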
Some additional readings:
Benefits that search engines get – users must actively opt out:
What is a “busy fee earner”? Read this here.