Stephen Wolfram has given notice that “Something Very Big Is Coming” with the introduction of a new symbolic programming language.
In essence, this new language has the potential to allow a natural expression of intent or action. With other computer languages, a great deal of work and thought is required to build up data structures in particular formats so the computer can “process” things or offer users the ability to manipulate or visualize data of specific types. Hence we have separate word processing, image processing, and music software. We have separate spreadsheet and database tools. The work of building all these data-type-specific software tools has required specialized skills: programming, database administration, user interface design. The Wolfram Language may wrap the lower-level implementation of these computer-oriented tasks in an interpreted higher-level language which just does what we tell it to do. The words of the language become commands for action.
See Spot Run
Imagine the creative possibilities of a language which could take “See Spot Run” as a command to generate an animation of a running dog. Such a language already works in our heads. The input triggers processes which cause us to “see” (visualize or imagine) that “Spot”, a dog, is moving quickly (running). Computer languages typically don’t have this expressive power because of ambiguities of meaning. “See Spot Run” could also be a complaint you make to your dry cleaner about the ketchup stain on your tie: Notice Stain Grew-larger.
The whole point of the first computer programs was to calculate the answer. IBM’s Watson has shown how a computer can play the game Jeopardy! by entertaining many possible answers and, effectively, guessing at which one is best. The guesses may not always be correct, but the “thinking” behind the answers has the advantage of being fast, vast and at least somewhat transparent. When the system “explains” how it arrived at the results, misunderstandings can be clarified. For example, with the new Wolfram Language, if “See Spot Run” is too ambiguous then “See Dog Run” might resolve the ambiguity.
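To make the idea concrete, here is a toy sketch in Python (every name and rule in it is hypothetical — this is not how the Wolfram Language works internally) of turning a three-word phrase into a command for action, with the kind of ambiguity check described above: an unknown noun prompts the user to restate, just as “See Dog Run” resolves “See Spot Run”.

```python
# Hypothetical mini-interpreter: map "See <entity> <action>" to an intent.
# The vocabularies below are invented for illustration.
KNOWN_ENTITIES = {"Spot": "dog", "Dog": "dog"}
KNOWN_ACTIONS = {"Run": "running", "Walk": "walking"}

def interpret(command):
    """Turn a phrase like 'See Spot Run' into a description of the scene."""
    verb, entity, action = command.split()
    if verb != "See":
        raise ValueError(f"Unknown command verb: {verb!r}")
    if entity not in KNOWN_ENTITIES:
        # Ambiguity: ask for a restatement, e.g. "See Dog Run"
        raise ValueError(f"Ambiguous entity {entity!r} - try a known noun")
    if action not in KNOWN_ACTIONS:
        raise ValueError(f"Unknown action {action!r}")
    return f"animate: a {KNOWN_ENTITIES[entity]} {KNOWN_ACTIONS[action]}"

print(interpret("See Spot Run"))  # -> animate: a dog running
```

The point of the sketch is only that the “knowledge” lives in the language, so the user supplies intent, not data structures.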
MOOCs or MOOQS?
Ubiquitous computing — the idea that we can now have unlimited, any time, anywhere access to computing resources — implies learning experiences could be atomized. We might not need a course in anything if we can get answers to all our questions on demand. Thus, the potential exists for Massively Open Online Courses (MOOCs) to be replaced by Massively Open Online Question Systems (MOOQS). Google Search and Stackoverflow are two examples, both of which depend on some human having worked out and posted an answer for others to find. The Wolfram Language may ultimately be able to interpolate between human-generated answers, making it possible to ask the system novel questions which tap into the language’s algorithms to compute novel answers. Such computing promises “intelligence amplification” — rather than “artificial intelligence” — as it offers convenient and natural access by humans to algorithmic power without requiring artificial expression of the intent in computer code.
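The lookup-versus-compute distinction can be sketched in a few lines of Python (the answer store and question format here are invented for illustration): a MOOQS-style system first checks for a human-posted answer, Stack Overflow style, and only falls back to computing a novel answer, Wolfram style, when no posted answer exists.

```python
# Hypothetical MOOQS sketch: human-posted answers first, computation second.
POSTED_ANSWERS = {
    "what is a mooc": "A Massively Open Online Course.",
}

def answer(question):
    key = question.strip().lower().rstrip("?")
    if key in POSTED_ANSWERS:
        # A human already worked out and posted this answer.
        return POSTED_ANSWERS[key]
    if key.startswith("what is "):
        # No posted answer: try to compute one from the expression itself.
        expression = key[len("what is "):]
        try:
            return str(eval(expression, {"__builtins__": {}}))
        except Exception:
            pass
    return "No answer found - try rephrasing."

print(answer("What is a MOOC?"))   # looked up
print(answer("What is 6 * 7?"))    # computed: 42
```

Crude as it is, the fallback branch is the interesting one: it answers questions no human has posted an answer to.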
Steve Kolowich’s piece “The MOOC ‘Revolution’ May Not Be as Disruptive as Some Had Imagined” in The Chronicle of Higher Education left me flabbergasted. The article reflects institutional inertia at its worst. Change is coming. The questions are when? and for whom?
To understand what the word “disruptive” really means, consider this statistical table, U.S. Railroad Employment Statistics (1947-Present), this PDF of Motor Vehicle-related employment (1990-2013), and Employment of Teachers and Software Developers, May 2012.
- Railroads: 85% drop in employment
- Motor vehicles: 53% drop in employment, peak to trough
- Millions of teachers. Under a million software developers. For now.
Whether you think traditional educational systems are like railroads or motor vehicle manufacturing, new MOOC-like systems are going to “transport” some learners to their learning destinations in new ways. That scenario becomes disruptive when teachers transition from “driving trains” to “building cars” and students transition from “riding trains” to “driving cars”. Looked at from this perspective, some form of disruption appears highly likely in coming years — very possibly involving a lot more personal “travel”.
Who will go on these new, digitally-enabled learning journeys? The “disruptive” changes may take place far from traditional educational settings. If Twitter “disrupted” politics in the Arab Spring, imagine what MOOCs might do in other countries.
Anecdotally, I just saw a woman wearing a full, black burqa — her face completely covered — typing away intently on her white iPhone …
At Kiwi PyCon 2009, I gave a presentation titled Voice Interaction in Python to the breakout room in Christchurch. As I recall, two people were in the room. I did put the talk on SlideShare, however, painstakingly adding a voice-over audio recording and syncing it to the slides. Four years later, that talk has been viewed over 7,000 times!
At Kiwi PyCon 2011, SlideSpeech was still called Wiki-to-Speech, but the intrepid Ben Healey managed to add speaker notes to his slides to create a computer-voiced, talking version of his talk, Document Classification using the Natural Language Toolkit. Here’s the video version, with over 1,000 views.
From Audrey Roy, we had Python and the Web: Can we keep up?.
Jeff Rush contributed The Magic of Metaprogramming.
And Glenn Ramsey gave us Design Patterns in Python.
Those three presentations took their scripts from notes I “transcribed” during the talks.
A few months later, in February 2012, SlideSpeech Limited was founded and the open source Python project served as the prototype for the Java-based presentation converter service available today at http://slidespeech.com. You can learn more about the SlideSpeech system from the FAQ.
In the last two weeks, I’ve learned enough about two languages I had never worked with before, Go and Dart, that I was able to write and deploy a Go-based web application called Assignment Sheet Builder, running on Google AppEngine, and use it to begin to create an online “course” about Dart.
I would like to persuade you that the combination of SlideSpeech + Assignment Sheet Builder, or something similar, could be useful for sharing the talks and workshops of Kiwi PyCon 2013 with an online audience.
There are several ways to put voice over audio together with slides using SlideSpeech.
- The simplest and best approach is to type the script of the voice-over into the speaker notes of your slides. This is simple because it involves word processing rather than public speaking + audio recording/editing. In my time trials, it cuts the time of making a talking presentation in half relative to recording and editing audio or video. The text-based approach is best because the resulting script can be updated and improved at any time (like a wiki page) and is searchable (like a Wikipedia article).
- If you want more control over the voice, you can use the SlideSpeech plugin for LibreOffice to drive the voice on your PC or Mac while making a screen cast of your talk.
- If you want a human voice recording, Audacity allows chopping an audio recording into pieces based on labels (the feature is under File > Export Multiple …) and SlideSpeech allows uploading these, one by one, onto your slides. The results sound like this (46 slides, 1 hour long).
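For the third, human-voice route, Audacity’s exported label track is just a text file with one tab-separated line per label: start time, end time, label name. Here is a small Python sketch (the label names and times are made up for illustration) that reads such an export and lists the segments you would upload to your slides, one by one:

```python
# Parse an Audacity label track export: "start<TAB>end<TAB>label" per line.
def read_labels(text):
    """Return a list of (label, start_seconds, end_seconds) tuples."""
    segments = []
    for line in text.strip().splitlines():
        start, end, label = line.split("\t")
        segments.append((label, float(start), float(end)))
    return segments

# Example export for a two-slide talk (invented values):
labels = "0.000000\t12.500000\tslide01\n12.500000\t30.250000\tslide02"
for label, start, end in read_labels(labels):
    print(f"{label}: {end - start:.1f} seconds")
```

With the segment list in hand, File > Export Multiple … produces one audio file per label, ready to attach to the matching slide.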
It would be great if Kiwi PyCon — which contributed so much to my learning over the past 4 years at AUT — could be the first SlideSpeech-ified conference, with a fully searchable set of conference presentations.
What do you say?
Rory O’Brien presented on Rapid eLearning in deMOOC from New South Wales, Australia. Lesley Gardner presented “Where has all the noise gone” from the University of Auckland in New Zealand. Students in Mr. Salmon’s 3rd and 4th grade classes presented on Plants from Edmonton, Alberta, Canada. But wait! These presentations were made around the world, yet I watched them all from my mobile phone at home!
The vision of a planet united in learning, any time, anywhere, is real. We needed the ability to freely share our knowledge. Now we have it. The next step is relatively simple: linking different pieces of learning material together. As I suggested to Mr. Salmon, if his students had each worked on one aspect of plants, instead of each repeating the same (or similar) presentation, the collection of presentations could become a whole unit on plants for other students to study. This is how Wikipedia collaboratively built up into the largest reference work on the planet. We can now achieve the same scale with learning materials. I call this Socially Responsible Learning, where the output of learning becomes the input for other learners.
If you are as excited as I am about this prospect, please join the Google+ community called Using Google Apps as a Free Learning Management System which you can read more about here.