A couple hundred years ago, higher education was not something that could be sought by just anyone. Only rich white males could hope to continue in academia past basic literacy and arithmetic. Fortunately, this is no longer the case. Now, people from all backgrounds can attend school through grade 12 for free and can apply to colleges via need-blind application processes that ensure a more level playing field for those seeking traditional higher education. But even now, there are those who think higher education should abandon the brick-and-mortar edifices that have housed great libraries for centuries. They suggest that knowledge could be disseminated much more efficiently through technological means, and with the price of an elite Yale education currently sitting at $200,000 (and rising quickly), I have to admit that I agree with them.
The Access to Knowledge (A2K) movement is a recent effort to reform the way people think about education. According to its proponents, education is not something that should be denied to anyone. In fact, it’s put on the same level of importance as justice and freedom, and given education’s profound effect on the economic development of a country, this seems reasonable. After all, if a developing country is dependent on foreign aid for educated reformers and leaders, how can it hope to become self-sustaining?
Some pretty high profile universities are jumping on this A2K bandwagon, and the results are pretty astounding. MIT OpenCourseWare was launched in 2002, yet already has over 2000 courses online, 46 of which have complete video lecture series. These courses have been visited over 146 million times by more than 104 million viewers worldwide. Yale followed suit in 2007 launching Open Yale Courses with 7 full video lecture series. Since then the library of video lectures has grown to include 42 courses and is still rapidly expanding. Other programs that aren’t linked to universities include Khan Academy, which boasts a large collection of over 3600 videos explaining high school and college topics.
With this vast amount of material online, many people are now seeking some form of proof that they have completed online courses. Many users want a certificate they can show to potential employers to demonstrate that they have mastered certain material, and others seek credit at the institution that provided the open courseware, but open course providers are reluctant to offer any sort of accreditation. Administrators insist that these programs are designed to further the university’s goal of disseminating knowledge, but are in no way meant to serve as a duplicate of a Yale or MIT education.
As a student at Yale, I would have to agree with the administration of OYC that credit at the provider institution would be unearned. The course material is an important part of a Yale education, but in no way is it the only part. For example, students who learn via open courseware are able to view videos of lectures and complete assignments, but it would be nigh on impossible for professors to grade and provide feedback for the millions of people that are viewing the courses. Furthermore, an education at an elite institution is about more than just the knowledge you acquire in class. A degree from an institution such as Yale or MIT signifies not only that you are extremely knowledgeable in your field of study, but also that you have spent 4 years cultivating a new way of thinking by engaging with some of the brightest professors and students on the planet. This isn’t to say that interaction among peers is impossible via an online course, but it certainly isn’t the same as what you would experience on an elite college campus.
This being said, it is still possible that some sort of certificate could be presented to those users who demonstrate an adequate knowledge of the subject via an online test (that could feasibly be graded electronically), but the value of this certificate would have to be decided by potential employers. It wouldn’t be the same as a degree earned at an old-fashioned brick-and-mortar institution, but someone who had already proved their aptitude in, say, applied mathematics, would certainly be more valuable than someone who hadn’t done so. And when you consider the possibilities of open courseware in the developing world it once again seems worthy of high praise.
With the high cost of textbooks and the added cost of transporting them, it seems unreasonable to even suggest shipping small libraries to the developing world, much less hiring world-renowned staff to give lectures there. However, open courseware makes a similar reality much more plausible. Rather than purchasing hundreds or thousands of books, schools in the developing world could instead purchase a small handful of computers for a fraction of the cost. These computers could support satellite internet connections and thus circumvent the need for infrastructure improvements. Most open courseware was created in English, but many resources are being translated into multiple languages, so it is likely that students will be able to learn in their native tongues before long. Furthermore, exposure to computer technology is likely to be instrumental in helping developing countries establish themselves in the global economy.
So maybe open courseware isn’t ready to completely replace brick-and-mortar institutions, but in the coming decades, it will certainly play an important role in equalizing access to education for people across the globe.
Yale professors are asking the administration to conduct online seminars or sections in the name of spreading knowledge. If Yale College Dean Mary Miller moves forward with the recommended program, it would be a slap in the face to Yale students and a surprisingly hasty move given how Yale has dragged its feet in hosting MOOCs.
Imagine yourself in a Yale seminar sitting next to a student from Quinnipiac, and you’re both getting the same credit for the course. The benefits? That QPac student is probably part of the grading curve. The downsides? Everything else. I’m all for spreading the knowledge around, but can’t we do this without degrading the Yale student’s experience? MOOCs are perfectly fine for that – I can’t see or hear you on the other side of the screen, so I don’t care if you watch the video of my lecture.
Sure, maybe the application process selects only outside students who would contribute to the course (and who are willing to pay the exorbitant price). But the Yale experience is still compromised when an outside student is able to take the place of a Yale student, even one who would not contribute as much in the classroom. Typically, the best seminars or sections are all filled up by lucky Yale students who were able to get in during shopping period, either through the course lottery or application. So if Yale students now had to compete with outside students to enroll in a course, then we will have lost a very fundamental right as Yale College students.
I doubt that accepting outside students will in itself bring novel perspectives to the seminar that couldn’t be found by bringing in another Yale student instead, since so many of Yale’s own students have a broad range of experiences and come from a variety of backgrounds and countries. Previous online courses have included outside students, but those courses were held over the summer, when outsiders didn’t need to compete for spots with Yale students, when taking courses is not a right for Yalies but a privilege that must be paid for ($3,000, to be precise), and when professors were free from their school-year responsibilities. This issue is less about being open-minded and willing to welcome outside students into our classrooms than it is about preserving the benefits of being a Yale student. We can only share the Yale education insofar as it does not infringe on Yalies’ right to attend the classes of their choice and benefit from the Yale education – particularly considering how difficult it currently is to get into many good seminars.
As for using the online medium itself for seminars and discussions, I’m unsure that it does anything to enhance the experience. While it no longer allows students to sit quietly in the back of the classroom, it now allows students to surf the web or check their email during class. It is also harder for students to engage each other directly in discussion via the bizarre combination of instant messaging and video chat, and it may make it more difficult for the professor to get to know the students in the class.
I understand some forward-looking professors may want to experiment with the online medium, and having the online option for summer classes alone is useful since students who are at home or studying abroad can continue to take classes, giving them more flexibility rather than being forced to stay in New Haven for Yale Summer Session. But during the school year, it simply doesn’t make sense to have the entire class held online unless it actually improves the class experience, which hardly seems to be the case.
Online-only courses have great potential to improve introductory math and science lectures, where a great professor’s online lecture would do much better than a graduate student’s poorly delivered lectures in person; where skills are adopted by students at widely different rates; and where most students merely take the course as a prerequisite, or to develop foundational technique and knowledge. But unlike lectures, seminars are based significantly on the interactions among the students, and thus there is so much more that is lost when a seminar is moved online versus a lecture. And after all, seminars are often the classes that come to mind when Yalies think of their best classes – not lectures (and for those Yalies whose favorite class is a lecture, they probably never took a class that wasn’t a prerequisite, and/or the professor should probably be teaching a seminar instead).
Dean Miller, keep seminars offline, and at least keep outsiders out of our seminars.
As is made clear by the readings for today – particularly the New York Times article – the internet is drastically altering education in two ways. Firstly, the internet has provided an incredible technical tool for universities and other educators to make their courses widely available. Secondly, the internet’s ethos – free, easy access to uncensored information – has infected a large swath of educators, encouraging them to rethink the way in which our society trains and educates its youth. Let’s take these two aspects in order.
It’s not difficult to see the enormous advantages that an internet classroom can have over a physical one. For starters, the internet removes almost all logistical constraints – no need for a classroom that can fit all the students, and no need for a commonly scheduled time that works for every student, for instance. Whereas before classes had to be held in specific locations at specific times, the internet allows students to take in their lectures at any time, in any place. Now you can go to class in sweats (oh wait, that’s nothing new). It’s easy to see why university administrators – and students who aren’t fond of waking up for early classes – are thrilled by the prospect of technology that allows you to watch lectures when students are awake, alert, and actually interested (if you’re watching the lecture voluntarily, you probably won’t be checking facebook the whole time).
Although the technical details of the internet explain why making virtual classrooms has such allure, it is really the change in attitude about access to knowledge and education that is driving the proliferation of online learning and Massive Open Online Courses (MOOCs). I believe that this evolving attitude comes directly from ideas about the internet being, at heart, open, cheap, uncensored, egalitarian, and accessible to all. Consider the mission of Wikipedia: to provide free, accurate information to everyone worldwide. Open Yale Courses and other similar initiatives all have in mind similar goals: to take what was previously an extremely expensive product available to a select few (like a complete Encyclopedia Britannica) and turn it into a product that can be accessed by anyone, at any time, for free (like Wikipedia). To most, this is a cause for celebration; for a few, it is a cause of consternation, which raises the question:
Will it Work?
Before we get too excited about a future of all-online universities, let’s remember that we’re all still Yale students for now – we don’t yet know whether virtual education will become the norm. For now, the vast majority of those taking online courses (69% in Yale’s case, according to the New York Times) are independent learners – adults who are taking the courses to exercise their minds or to explore an interest. The next generation of engineers is still being trained on college campuses, not online. I believe that there are two very important functions of an institution of higher learning that online educators have yet to successfully replicate. The first is the community of a university, and the attendant benefits of integration into that community; the second is an effective evaluation/grading system.
Consider the demands that we place on a university: between the time they matriculate and graduate, we expect students to transform from snotty high schoolers who know little about the world into wise, well-mannered and well-adjusted adults. Surely all the skills required for adult life aren’t taught in ECON 110 or ENGL 120 – or even CPSC 183! If we expect universities to be centers of socialization, can an all-online university ever be successful? I believe that we will have to change our expectations about the goals of a university education (to more technical and less social ones) if online education is ever to reach a wider audience than casual adult learners. Furthermore, we should consider the functions of a university campus – it is a hub of intellectual thought and research. If the campus becomes little more than a studio for the production of lectures to be put online, surely it will lose some of the vibrancy and academic exchange that we now associate with universities.
Grading and Accreditation
Beyond the normative argument that the current university system has important benefits, there is a practical argument to be made against online education: no one has yet discovered an effective and simple way to perform one of the most important functions of a university – giving its students a grade. Programs like Open Yale Courses do not give feedback to someone auditing a course, and besides courses in very technical subjects, MOOCs are not able to provide their students with certificates of completion or any other accreditation. Like it or not, educational institutions are more than just places for students to learn – they are sorting mechanisms for employers. A university degree tells employers that an individual is capable of performing at a high level (and the high price of college is related to the large spike in earnings that a college graduate can expect). Until MOOCs provide degrees or certificates of completion, they will not be acceptable substitutes for regular universities.
Where Do We Go From Here?
It is hard for anyone to say what the future holds for online education, but there is little doubt that the tools the internet provides will continue to be integrated into our current model of education. In fact, it is likely that our education system will be changed in significant – though hard to predict – ways by the arrival of this disruptive technology. In particular, it seems likely that online education can be a boon for those in developing countries and for employed adults who wish to continue learning. Given the poor alternatives currently available for these two groups, online education has the potential to be an extremely important tool in providing low-cost (but hopefully high quality) education to them. However, whether online education can drastically alter the core of our system of higher education is yet to be seen. At best, it will lower the cost of an elite education and ultimately raise levels of education and productivity across the workforce – but we won’t know until further along in our young experiment with online learning.
Since January this year, I’ve been involved with a Singapore-based educational startup called openlectures. A little like Khan Academy, we offer video lectures in several academic subjects, freely accessible to anyone who can view YouTube videos. This year we hit 1000 lectures filmed and are continuing to produce them at a steady rate. Unlike Khan Academy, our approach is specifically focused on complementing a country’s official school system, so the lectures available right now focus on some of the most common subjects Singaporean pre-university students study like economics, chemistry and math. In time, we hope to create things like SAT and AP prep materials for the American market, then material for the Abitur, and eventually conquer the world. Our founder and “strategy” (that’s just what he calls himself) both just started their first year at Columbia, where they’ve roped in an unsuspecting French freshman to help kick off openlectures USA efforts. The rest of the team includes people from Yale (cheer), Harvard (hiss) and several other top schools in the US and UK – and of course students back home in Singapore.
I was originally supposed to lecture on art, art history and art theory, but my legendary procrastination skills meant that by the time we went through the first complete overhaul of the lecture system (imaginatively named “OL 2.0”) I was still working on how to prioritize works in the school syllabus without falling into the trap of evaluating art in a vacuum. Those lecturing plans are now on indefinite hold (“nobody studies art anyway!”), and my current role in the organization is “Artistic Director/Cake”, where my main contribution is anal-retentive criticism of web design and user experience. Nevertheless, I’ve been with them for quite a while, and thought I would use this blog post to give everyone an insider look into one online education initiative, especially the “official” responses to the project.
There are already so many free learning resources online – why add openlectures to the mix?
Although knowledge is universal, openlectures was still created to address some educational problems specific to Singapore. Firstly, although Singapore’s education system grew out of the UK system, over the years changes in both countries have resulted in Singaporean-style education being quite different from what you find in open courseware from any other country.* For instance, the way microeconomics is taught at A-level (equivalent to grades 11-12) is very different from how it’s taught here at Yale and many other Intro to Microeconomics college courses on iTunes U. Singapore A-level economics looks at the big picture and almost always relies on general models instead of quantitative analysis, and examination responses are supposed to reflect this. openlectures takes that into account and you will be hard-pressed to find numerical examples in our economics lectures.
Secondly, the philosophy behind openlectures (similarly to the Access to Knowledge movement, we believe that education should be freely accessible to all) is very general-sounding, but was driven by a specific phenomenon: Singaporean schools cover a lot of material in not a lot of time, which is great for helping us skip introductory-level courses if we attend college in America, but also makes classes hard to keep up with for many. At the pre-university level, most information is disseminated through huge lectures to 700 students at a time for the more popular subjects, supplemented by a few hours of “tutorial” time per week that is closer to traditional 25-kids-in-a-room classroom teaching. Private tuition outside of school is seen by most Singaporean parents (and, sadly, students) as a necessity for keeping up with school – a shadow educational system exists next to the one run by the government, one that you can only access if you have the money to pay private tutors. We believe this reflects a shortcoming in the school system, and wanted to do something to help students. In the spirit of open access, openlectures’ terms are also based on the idea that “since we’re here to offer something for free, we’d like to do it with as little [sic] strings attached as possible”.
We do have a long-term goal more similar to Khan Academy of “anyone who wants to learn anything can come here”, but realistically, we’re more focused on test prep for now. But as our lecturers start taking more fun college courses and learning about things beyond what we did in high school, who knows? One of us might just decide to do a series on underwater basket-weaving in the summer.
Our efforts are mostly coordinated via Facebook. It is a little weird that I haven’t met many of the people I work with – or if I have, I don’t have any impression of them. The guys whose web designs I bash whenever I’m procrastinating on a paper? No clue what they look like (never bothered checking their Facebook albums).
Our coordinators put up announcements like “Session as usual come this Saturday. Who’s coming?”, people respond, and magic happens in the small green room we film in. How we actually do the lectures is one area I can’t provide much insight into, unfortunately, because I’m not particularly involved with it. My contributions usually go along the lines of:
And since we are a bunch of teenagers after all, a fair amount (probably too much) of the material on the Facebook group looks like this:
Open course material on the Internet may be free, but getting it there definitely isn’t. The William and Flora Hewlett Foundation, the principal financial backer of the open educational movement, has spent more than $110 million over the past eight years, with more than $14 million going to M.I.T. The cost of re-creating the educational experience is high. Only 33 of the 1,975 courses posted by M.I.T. have videos of lectures. Another hundred or so contain multimedia material like simulations and animations. The rest is simply text: syllabuses, class notes, reading lists, problem sets, homework assignments.
Relying largely on money from Hewlett, Yale has spent $30,000 to $40,000 for each course it puts online. This includes the cost of the videographer, generating a transcript and providing what Diana E. E. Kleiner, who runs Open Yale Courses, calls “quality assurance.”
openlectures started producing our first complete courses with a budget of something around S$1000 – around US$820. We are literally a bunch of teenagers in a room with a camera, microphone, green dropcloth and computers. The whole project is run by volunteers; we have a team of over 100 people doing lecturing, admin, public relations, design etc. but no one has ever gotten paid. There’s money involved (grants! free money!), but it goes to buying filming supplies, our domain name, and coffee to keep the lecturers running. We’re also not picky about our setup. Until someone threw money at us, we filmed all our lectures in a tiny room near a busy street (I think we still use that room, actually; I haven’t been there in a while because, you know, studying overseas).
Zooming in carefully would give us videos like this:
And now that we have a green screen (basically a green bedsheet), we can do this! (This video also shows that 1. we’re aware that the Singaporean accent is kind of weird, and are working on subtitling all the lectures; 2. we try not to repeat school lectures and try to share strategies that have worked for us instead)
While we’ve previously tossed around ideas for improving the learning experience on our Facebook group, truth be told, openlectures is not revolutionary. The earliest openlectures videos used the traditional “guy standing in front of a whiteboard” model of remote education. Newer videos have replaced that whiteboard with a green screen onto which we can superimpose animated graphics, but it’s still a very traditional approach to lesson delivery, especially compared to things like Khan Academy’s computer science lessons which teach by actually getting you to write programs on the spot. (When I asked founder Linan Qiu why we used this model, he justified it with “you always want to see a person […] explain something to you […] seeing his passion/his gestures”.)
So openlectures isn’t that special – we just explain topics better than our school lectures do, put the videos on the internet to make them rewatchable, and do it for really, really little money. But the way the Singapore government has treated us suggests they view us as a serious competitor to the official education system.
Responses to openlectures from “traditional” education providers
Shortly after we started to get off the ground, openlectures began to attract interest from several parties. First there was the press…
And then came several private tuition companies who approached us about “partnerships”, i.e. running ads for the very industry we are opposed to. (We gave them a polite middle finger and they learned to stop bothering us.) But then we received word from Singapore’s Ministry of Education (MOE) – government attention! How nice! While I’m not too clear on the specifics, I know that the Permanent Secretary (= big shot) of the MOE has tried meeting us a few times but (quoting Linan Qiu) “couldn’t get anything out of us”. I’m not too sure, but I believe the Minister for Education himself, one of the most important people in the country, has also asked for a meeting with our CEO this month.
I initially thought they were concerned about copyright and intellectual property issues. Most of our lecturers work based on notes from school, though of course we break them down and incorporate our own examples or external knowledge. But my counter-argument to this would be that we choose what to teach based on government syllabi, but what we teach itself is universal knowledge that cannot, and should not, be copyrighted. Also, educational use generally gets a “fair use” free pass when it comes to copyright enforcement. And it seems the MOE isn’t really interested in that at all.
Speaking with the founder reveals he thinks the MOE is more interested in understanding how we work, and then probably taking over us. “They’ve been trying to do an online system for the past decade ever since the SARS crisis, but failed”, he told me, “and we came over and did it with a budget half the salary of their admin executive, i.e. around a thousand bucks. […] they tend to think that they have a claim over what you do simply because you’re a student. Oh and second thing is that they feel threatened and just wanted to make sure that we’re not trying to subvert the school system.”
“[…] they wanted us to be subsumed under them – they didn’t make it so explicit but they wanted to “fund” us, or give us support. And usually what MOE does is that from then on they start putting their own staff here. Oh and they wanted ot [sic] “supervise” what we are doing.” (edited for punctuation, because we were talking on Facebook Chat)
The government probably felt threatened because there has been much buzz about open and free education replacing traditional educational providers, but for now at openlectures we hope to complement traditional school-based education instead of replace it. Hence our lectures are all structured around helping students taking Singapore-style examinations. Still, that our education ministry thinks we’re doing a good enough job to feel threatened is high praise indeed.
A bunch of kids teaching other kids: what about quality?
This is a legitimate concern, and probably the most important. But students teaching other students has been around for a long time already: it’s called peer tutoring. The openlectures system is highly reliant on lecturers knowing their stuff, but based on our academic transcripts it’s generally assumed that we do.
Some quotes from the openlectures staff when I asked them about this problem:
Founder Linan Qiu: We don’t admit to have error free content all the time. But we do admit to our mistakes and refilm vigorously. Those whose videos have been trashed by the terabyte by Kenneth [CEO] and I will know this.
CEO Kenneth Lim: We do a lot of ground work before a lecture comes into place. There’s the syllabus outline which everyone works on, the scripts, the slides, the post-processing and the uploading/publishing. At every stage the person who’s doing it is keeping an eye out. Sometimes it’s a technical problem, sometimes it’s a content problem. If it’s a technical problem, we see whether we can do anything in post-production. More often than not we can, but if we can’t, then we refilm. [name of lecturer] is one such victim. Entire lesson refilmed because we couldn’t key her nicely. If it’s a content problem… since we introduced OL2 we’ve not had any content problems. We are awesome like that.
Most importantly, as students, we have an advantage in knowing what our audience wants. Most lecturers begin lecturing right after graduation, and quite a number lecture while still in school. Every one of us has fallen asleep in a useless lecture; every one of us has had the experience of frantically trying to make sense of confusing school notes the night before the big exam, so every one of us also knows what confuses other students the most, and what works in helping people understand the material.
Reception from the public
People seem to like what we’re doing. So far our website has had 41,000 unique visitors (about half of them returning) and 350,000 page views. The average visit duration is about 5 minutes, enough to view one or two lectures, and 30% of our visitors show browsing patterns indicating that they watch several videos at a time – presumably they are working through a course. And then there are heartening YouTube comments from students – not just Singaporean students:
Although sometimes we do get annoyingly patronizing comments.
But yes. Overall, heartening comments.
And now because the deadline is approaching I am going to abruptly end this blog post: I hope this has been an interesting post from the other side of open education! Feel free to comment with any questions or criticism for me or the team.
* Open courseware is also dominated by providers from the US. Just saying.
I sit here in one of the world’s most hallowed institutions of higher learning, attempting to write an inventive yet sensible blog post, one whose wit and charm will force you to read it from beginning to end. But, alas, my efforts are to no avail. So, as anyone in my position would ask himself or herself: who is to blame? Is it the result of my education thus far, one that has failed to instill in me any sense of creativity? Are years of rote memorization at fault? Would the use of an iPad or a computer have helped me in any way to avoid situations such as the one I find myself in right now? Wikipedia, where are you when I need you most?
In all seriousness, the education crisis that we see within our own country has prompted policy makers and schoolteachers alike to reexamine what it means to have a “Western education.” More specifically, within the past decade, the prospect of incorporating technology into a traditional Western education has become more and more appealing. If we expand our focus globally, the use of technology would allow regions that lack fundamental infrastructure and manpower to successfully educate their populations.
However, society presents us with a paradox of sorts. On the one hand, as Isaac Asimov contends in his short story “Profession,” technology may exacerbate the weaknesses of our current education regime, one that, some argue, promotes rote memorization and, ultimately, the attainment of a job rather than developing the ability to think critically. In essence, he illustrates the appliancization of the human being under an education regime that not only governs whether or not you will “be educated” with pre-determined programming, but also prescribes your line of work, ultimately robbing all but a small fraction of the population of the ability to think freely and to learn. Yet others contend that the current “textbook” status quo cannot continue. Consequently, in examining the traditional methods through which a “Western education” is administered and how these methods will ultimately intersect with technology (both in the sense of actual hardware and accompanying networks, like the internet), we are forced to ask ourselves, as Asimov does, what the fundamental purpose of “getting an education” should be.
The introduction and development of programs such as Khan Academy, providing lessons on topics ranging from chemistry to philosophy all housed within cyberspace, have, in part, restored what Asimov sees as the ultimate goal of education: a lifelong pursuit of knowledge for the sake of learning that will allow for innovation rather than appliancization. To this end, these programs help alleviate the problem of exclusivity not just within our own country’s education system, but within a broader global context.
Moreover, it is the “generativity” of technology itself that may ultimately secure its foothold as far as modern education is concerned. Take, for example, the One Laptop Per Child (OLPC) project, which began in 2005. Aiming to give one hundred million hardy, portable computers to children in the developing world, the goal of the project is “to create an infrastructure that is both simple and generative,” allowing children the ability to think critically within a community setting while “fixing most major substantive problems only as they arise, rather than anticipating them from the start.” In conjunction with such developments, we see that access to knowledge and science has become “protected by Article 27 of the Universal Declaration of Human Rights . . . [balancing] the right of access with a right to protection of moral and material interests.”
Yet we would be remiss to assume that technology will solve all of our problems. While the number of open courses at universities like Yale or MIT is growing, and the development of integrated online teaching platforms helps provide equal access, these efforts do not completely address the underlying socio-economic and cultural barriers that we face in reality. Furthermore, there is no point in having these technologies available at little to no cost if those who need them most do not have the infrastructure (i.e. computers) to make use of such opportunities. There is only so much projects and organizations can do.
As for right now, and I’m sure for decades to come, a college degree will continue to be a signal to employers and society “that you’ve passed a certain bar.” Only time will tell if we can develop, as Neeru Paharia, founder of Peer 2 Peer University, which allows users to set up or participate in online classes, puts it, “alternative signals that indicate to potential employers that an individual is a good thinker and has the skills he or she claims to have—maybe a written report or an online portfolio.” It is clear, however, that we are experiencing the beginnings of “unbundling,” where the four elements of education—design of a course, delivery of that course, delivery of credit and delivery of a degree—will no longer be housed under “the same institutional setting,” as suggested by Roger C. Schonfeld, research manager at Ithaka S+R, a nonprofit service that helps academic institutions use technology for research and teaching. Ultimately, as technology and education walk hand in hand into the future, a balance of sorts must be found, preventing our society from heading down the same path that Asimov so strongly warns against.
Freedom. It’s a word that stirs deep feelings in the heart of any American, as much as the words of Abraham Lincoln’s Gettysburg Address or Ronald Reagan’s inaugural speech. So when I tell you that your freedom is threatened by your attitude about computers, you will no doubt listen carefully.
What is Appliancization?
It’s actually not me but Jonathan Zittrain who’s out to convince you that “appliancization,” the transformation of personal computers from flexible generative platforms to locked-down appliances, threatens your freedom—or perhaps more accurately, the freedom of technological progress. Appliancization is particularly apparent in Apple products. The typical iPhone can only run software from the App Store, and the App Store can only contain apps that conform to Apple’s nebulous whim. This means that if you buy an iPhone, you are buying not a completely customizable platform, but an appliance whose functionality may be limited by Apple’s judgment. With their Mac App Store, Apple threatens to do to the personal computer what they have done to the smartphone.
Why do we allow this ostensible appliancization to happen? According to Zittrain, we trust Apple-approved products because we value our computers’ safety above all else: “Viruses, spam, identity theft, crashes: all of these were the consequences of a certain freedom built into the generative PC. As these problems grow worse, for many the promise of security is enough reason to give up that freedom.” Whether or not Zittrain is intentionally paraphrasing Ben Franklin here, the way out of our appliancization conundrum is clear: we must show that, as consumers, we value generative freedom over security. The first step is to, um, give up your Apple products.
Maybe Apple Isn’t Evil, Yet
Let’s be honest, the technical word you just learned about isn’t quite enough to convince you to relinquish your Apple fetish. In fact, I’m pretty sure you don’t care that much about generative freedom, as long as you get to keep playing Angry Birds and taking artsy Instagram photos. Zittrain might give consumers too much credit when he claims that people use apps out of an informed desire to avoid bad code. What really drives us to apps is not security but convenience.
I’m not ashamed to admit that we’re all sheeple, guided by our desire to get the products we want with the minimum amount of effort. Apple has made thousands selling a smaller version of a bigger version of a 5-year old product. Now it is profiting off of, quite simply, a convenient market in which to acquire software. What would really drive people into the App Store isn’t, as Zittrain claims, a network security crisis, but some crisis where Google breaks and it becomes even more inconvenient to search the web for software.
Moreover, I would hesitate to call Apple’s actions outright “appliancization.” Sure, Apple might push its own apps and occasionally act irresponsibly as a gatekeeper, but its products are far from appliances. The iPhone is a platform, albeit a shiny, somewhat limited platform: apps can serve a vast variety of different innovative functions.
A Widespread Problem
It’s not just Apple that holds us under its corporate thumb, either. Zittrain claims that the rise of “Web 2.0,” the increasing use of browsers to do just about anything, also threatens technological innovation. Do you remember the days when you used an email client instead of Gmail, an instant messaging client instead of Facebook chat, and your computer’s built-in calculator instead of Wolfram Alpha? Or are you too busy listening to Pandora and browsing Tumblr to care? This very WordPress site demonstrates the ubiquity of Web 2.0, from its reliance on user-generated content to that weird gray sidebar on the right that doesn’t seem to serve any purpose.
Despite being a PC user who considers himself completely above all of those unthinking, short-sighted Mac users out there, I spend 95 percent of my time on a word processor or a web browser. I am completely dependent on the structure of the internet and the integrity of my web browser when it comes to most of what I do on a computer.
Sheeple Aren’t That Dumb
Web 2.0 presents a structural vulnerability in our network that is not immediately apparent, but Apple presents a more manageable problem. If you believe my claim about sheeple consumers, then in the end it’s not norms or laws, but the market that will decide our future. Zittrain can’t stop people from buying Apple products, but Apple can. As soon as the App Store starts infringing on our convenience, customers will take notice. And if the infringement grows to the extent that it outweighs the convenience of using the App Store, a separate market will emerge to satisfy consumers. If Apple decides to use its control over its hardware to block such a market, even fewer people will buy Macs. The average consumer might be dumb, but he’s not that dumb.
Developers, too, still have a stake in the survival of the App Store. They no longer have to process transactions or track licenses on their own. As soon as a critical mass of developers see the App Store as a detriment to their work, an alternative will emerge. If Apple blocks an alternative from emerging, they will lose the developers and the generative capacity they offer.
As we witness the development of Apple-style appliancization, we should also note the benefits that come with it. First, people unfamiliar with technology find app-based computers easier to work with, and those people matter too. Second, Apple doesn’t dictate the future of the PC; there remains a significant portion of the population that prefers the more open, non-Apple personal computer. We’ll sit back, grab some popcorn, and wait for what Apple does next.
In a move that seemed almost too ironic to be true, Amazon in 2009 remotely erased George Orwell’s 1984 from thousands of Kindle devices. The web giant’s spokesperson explained the book had to be removed because it was added to the store by a company that didn’t hold the rights to it. Amazon became a sort of Big Brother in the real world even as it erased the character from the digital one. This move and others like it point to a problem that could only increase in scale as we enter into a world where technology occupies a space at the very center of our lives.
People are now moving towards using more centrally controlled, or “tethered,” information devices like smartphones and e-readers in order to increase security and ease of use. But there’s a tradeoff here. The more tethered to the network our devices become, the easier it is for institutions to regulate them and the harder it is for users to tinker with them. Our devices are becoming appliances that, as Internet expert Jonathan Zittrain puts it, “can [only] be updated by their makers and [are] fundamentally changing the way in which we experience our technologies. Appliances become . . . rented instead of owned.” This is the kind of change that allows you to own 1984 one day and see it vanish the next. Companies simply have more control over the products and content they sell you because they can modify and monitor them from afar without your consent.
Where We’re Headed
We’ve become fairly accustomed to tethered appliancization on our smartphones and laptop screens. Soon, this model will make the jump to new kinds of “wearable computing” devices. Earlier this year, Google announced Project Glass – augmented reality glasses designed to overlay contextually relevant information atop the real world. The idea is to get technology out of the way and make it easier to share moments from your life in real-time.
If Google has its way, you might use these glasses for all sorts of things. If you’re a car mechanic, you could wear Glass and have information about what exactly you need to fix displayed right in front of your eyes. If you’re a doctor, maybe you’d use these during surgery to get easy access to important vital statistics without taking your eyes off the patient. Project Glass could also allow you to easily navigate a new city or give constantly up-to-date information during a natural disaster. No matter the use case, this technology would arguably be more integrated into daily life than anything that came before it, and regulation could take on a whole new level of eeriness. Tethered technologies like Project Glass make surveillance inexpensive and practical for regulators. If the government is wiretapping our mobile phones, what’s to keep it from doing the same with high-tech glasses? Surveillance would become less like Big Brother looking down on you and more like a first-person shooter video game. It’s not too hard to imagine that Project Glass might one day become ubiquitous. A government would only need to regulate Google in order to change the way that hundreds of thousands, if not millions, of people experience the world. While the U.S. might not be willing to go to such lengths, it’s not inconceivable to think that the Chinese or Iranian governments would.
Famed Silicon Valley venture capitalist Marc Andreessen likes to claim that software is “eating the world.” If that is indeed true, its next meal could very well be the auto industry. Google and others are working on a self-driving car that has already been approved in several places, including California. An autonomous car would be a kind of PC in its own right and would take the idea of “tethered appliance” to a whole new level. Car companies could micromanage our driving behavior in the same way that media companies dictate the use of our DRM-protected mp3s and ebooks. Self-driving cars could be forced to self-report to local police upon breaking the speed limit and tyrannical governments could even remotely disable cars or set curfews on their use in order to prevent people from mobilizing. Given that all these “mobile devices” would be connected to the network, the barriers to surveillance would be minimal. All kinds of new questions would arise: who, for example, would be responsible for car crashes? What new, unfair practices could car insurance companies think up? Major policy changes could be implemented as “minor technical adjustments” to code or technology in the car. We would sacrifice full ownership of our cars in the same way we’ve done with PC software and increase regulability for the sake of security (a tradeoff that, as we’ve seen in the PC world, is not always a beneficial one).
Let’s also consider the emerging practice of biohacking – a field that fearless teenagers and experienced doctors alike have shown interest in. To biohackers, computers are hardware, apps are software, and humans are wetware. “Gone are Microsoft’s windows into the digital world, replaced by a union of man and machine,” they say. These hackers see humans as the next frontier for technology and believe cyborgs will eventually become the norm in society. They implant chips into their bodies in order to connect to the network and experience novel things like electromagnetic fields and cybernetic telepathy. If this were to happen on a grand scale, humans would ultimately become tethered appliances themselves. Self-driving cars and wearable computers are layers built on top of the human experience. Biohacking brings technology, and thus regulability, to a far deeper level and forces us to rethink the idea of what it even means to be human. In a scenario where cyborgs do indeed become the norm, governments and companies would be able to regulate our very existence in the same way that they today regulate software, cars, and digital content.
I’m painting a very bleak picture here of a dystopia that certainly won’t come around tomorrow or even in the next few decades. It is, however, important to take the long-view and entertain seemingly improbable ideas – especially in an industry that’s moving so blindingly fast.
A Glimmer of Hope
With its new Kinect, Microsoft seems to be taking a different approach that may help us avoid the freakish future outlined above. While it can certainly help you master your Lady Gaga dance moves or improve your tennis skills, what’s most interesting is what the Kinect can do when it’s not chained to an Xbox in your living room. Developers have gotten their hands on the Kinect and used its advanced sensors and imaging technology to implement all sorts of creative hacks. Microsoft has even endorsed this practice; it set up a $20,000 fund to help companies interested in toying with the Kinect and came out with an ad promoting such innovation earlier this year.
From controlling robots, to enabling virtual fitting rooms, to helping blind people walk, the potential of the Kinect seems boundless. Though the appliance is “tethered,” it allows for a huge amount of generativity. In other words, independent users can tweak it to their own liking and come up with new, inventive ways to use it. Microsoft set an example here that the entire industry should follow. This kind of “hackability” leads to more innovation and will offset the costs of increased regulability that these tethered appliances often bring.
“Imagine the ways we’ll seem backwards to future generations”
Startup expert Paul Graham likes to think up new startup ideas by imagining the ways we’ll seem backwards to future generations. We, it seems, would certainly seem backwards if we abandoned tethered appliances altogether for fear of regulation. The benefits of connected devices are far too numerous to count. In order to maximize their potential, technology companies need to follow Microsoft’s lead and keep platforms open enough for innovation and secure enough to combat malware. Apple doesn’t necessarily have to turn their App Store into the Wild West, but their policies should at least be more transparent. Why, for instance, can’t this developer get his drone strike-tracking app approved? Governments need to meet these tech companies halfway and enact policies that facilitate innovation rather than hamper it. As tech companies grow increasingly powerful, they can serve as checks on the power of abusive governments, but only if they allow users to do a little of the hacking themselves.
Many of this week’s readings express concern about the degree of control Apple possesses over the development and distribution of apps for iOS platforms. Some articles, like Business Insider’s “Apple’s 10 Dumbest iPhone App Rejections” are characterized by a sense of humorous frustration at what seem like arbitrary or misguided reasons for disapproving apps. At times, these rejections are more than just a simple oversight. As a 2009 investigation by the Federal Communications Commission hints at, Apple perhaps had ulterior economic motives for rejecting Google Voice as an app.
But in this post I would like to turn to some more recent tensions between Google and Apple. Firstly, there is the case in which the YouTube app dropped off the homescreen of the iPhone and had to be resubmitted. In some sense, this wasn’t such a tragedy. The modified YouTube app was able to expand its offerings by tens of thousands of videos once it freed itself from the restrictive advertising policy Apple had hitherto subjected it to. (http://bits.blogs.nytimes.com/2012/09/11/losing-its-place-on-the-iphone-youtube-introduces-a-new-iphone-app/) But nevertheless, it demonstrates a growing realization on the part of Apple that it can control the way applications are distributed on its platforms.
The second development I would like to bring up concerns the ongoing competition between Google Maps and Apple’s own map application. As most of you know, Apple long ago switched over from Google Maps on the iPhone to its own markedly inferior app, and even today Google periodically voices complaints on the subject. In an article appearing this past November in the Guardian, Google expressed concerns that when it completes its Google Maps app for the iPhone, the app will be rejected by Apple. (http://www.guardian.co.uk/technology/2012/nov/05/google-maps-doubt-iphone) Sometimes it is hard to know how seriously to take these comments. But while this smacks of sensationalism, Google points out that Apple does not currently include any mapping app that uses the Google Maps APIs, even when it comes to well-known apps like Maps+. Even in this week’s readings, it was shown that Google Latitude has similarly encountered rejection by Apple. So the situation already seems rather problematic.
Still, in most cases Apple has the right to exclude or control the apps it sells, even though this might seem unfair. Apple is making it increasingly less convenient to use non-Apple software, but even then, one must remember that app users and developers still have the option of turning to other channels for obtaining and distributing apps. Although I sympathize with some of the concerns discussed in this week’s readings, for the time being, Google will just have to be content with complaining.
No, probably not. But that’s not what some people think. Ever since it was the little guy fighting big bad Microsoft, Apple has been known as the more controlled, more protective alternative. But now that Apple is a giant in its own right, that tight-to-the-vest nature can seem a bit… possessive. Critics call the iPhone App Store a walled garden where Apple rules all and generally has too much power, while Apple itself calls the store a “curated platform,” where apps are assured to be quality-controlled and safe. No matter which side of the debate you’re on, everyone can agree that as it stands Apple is the gatekeeper to a huge amount of content, and, more importantly, brand-obsessed Apple junkies. So who’s right, Apple or critics? Maybe they’re both right. Better investigate.
From the point of view of software developers, it’s unclear whether Apple’s walled garden, including the Mac App Store, is a boon or a curse. Yes, when developers sell their product on the Mac or iPhone store, Apple takes a sizable chunk of 30% of the profits simply for the privilege of being there. But perhaps more important than that 30% is the increased exposure a product gets when it is a part of the App Store, as well as the implicit Apple seal of approval. Instead of having to advertise independently and seek out ever-so-elusive customers themselves, developers in the App Store have access to an already-established audience, and one that immediately trusts the product simply by virtue of it being in the App Store. Though some critics say that this will reduce the presence of software bundles and the amount of information developers have on their customers, or will destroy developer independence, it doesn’t seem like many developers have rejected the App Store in favor of independence. Sure, some might object to the platform, but that certainly didn’t stop Apple from having 650,000 apps on its store as of July 24th, 2012. And the $5.5 billion Apple has paid to the developers of those apps is certainly nothing to sneeze at either. For many, it seems, it’s better to be in the club than outside looking in.
From the viewpoint of non-tech savvy consumers – and I include myself in this group – the judgment on Apple is a no-brainer. We don’t want independence; we wouldn’t know what to do with it. Rather than having hundreds of email apps, all of which differ in small, insignificant ways to me, the non-savvy, I’d rather just have a choice between two or three. While Apple may be criticized for some of its sillier rejections, on the whole, I would really prefer not to have access to a Baby Shaker app. While the Android app marketplace may have the shorter list of rules that the more independent-minded might crave, for those who would simply be more confused by more choice, the Apple App Store is the way to go. The only downside for those who don’t know any better is the long-term potential for less innovation. Developers making apps for a marketplace run solely in Apple’s garden are necessarily going to be limited by some restrictions, although not, I think, to the extent that doomsayers might suggest. Steve Jobs once said that “95% of all apps submitted are approved within five days.” Apple might be curating, but not that much. Besides, in this era, where the tech scene is ruled by a handful of giant companies, is dramatic innovation, rather than incremental change, really even possible? And, if dramatic innovation doesn’t seem feasible, couldn’t it be sacrificed for better quality products?
Really, the debate over the controlling nature of Apple comes down to two values: quality and freedom (with a little security thrown in). Some developers and critics may want freedom from Apple’s restrictions, but the marketplace has shown that the consumer doesn’t. As long as the majority of consumers do not realize what functions they are missing by only playing in Apple’s backyard, quality will win out over freedom for many. The rabid devotion some people have for Apple products will blind them to the alternatives. As much as Apple has been criticized for being a Big Brother figure, as long as it turns out pretty, functional products, that criticism will be confined to a small minority.
Shakespeare was well ahead of his time—yet again—when he wisely said that “All the world’s a stage.” Did he foretell private government secrets being broadcast on Wikileaks and Kim Kardashian’s personal thoughts on Israel being splashed on Twitter feeds to cause uproar on our contemporary international stage? The Internet has made everyone and everything so connected and shared that one action or one thought can spread like wildfire on the web. Take Sophia Grace’s rendition of “Super Bass” as an example, and her two-week meteoric rise to stardom when she became the darling child of the new Oprah, Ms. Ellen DeGeneres herself.
She looks happy, to say the least. But not everyone can be as talented, cute, and frankly, lucky, as this young British girl. Well, maybe that is, except her cousin, Rosie, the hype girl, who’s also enjoying meeting the biggest stars and exploring Disneyland on Ellen’s dime.
The point I’m trying to make here is that the Internet does not just create stars instantaneously—it also creates monsters. People become envious, defamatory, bad-mouthed individuals when cloaked with anonymity on the world wide web. They begin to say and do things they would never do in front of another human being. It could be something as small as an insult in a YouTube video comment on Sophia Grace’s 6,798th visit to Ellen, or something as significant as an entire blog dedicated to defaming an ordinary person with big dreams (see Civilizing The Internet, where a woman dedicates her blog to slandering an aspiring model). In the latter article, Rosemary Port, the defamatory blogger, and her lawyer try to quash a subpoena seeking to expose her identity by claiming she was protected under free speech, asserting that her words were akin to Hamilton and Madison’s Federalist Papers. Yeah right. The judge, of course, did not buy it, and Port’s identity was exposed after Google released her information. The model’s lawsuit against her was dropped and Port got cyberSLAPPed; that is, her internet anonymity was taken from her, and the world saw her for the monster she was.
But the question of anonymity and defamation need not apply only to obsessed stalkers. It can also apply to you, yes you, who may write a short reply on a small forum thread. Anonymity on the internet has bred a slew of defamatory gossip in sites like juicycampus (RIP), formspring, and even the Ivy League equivalent, Ivygate. When alone, posters write spiteful insults and divulge the most private details of normal people on a forum for the whole world to see. Do you think the blurry posts in the forums below really count as protected “free speech”?
These trash talkers, though, are empowered only through their hidden identity—they write to hurt others under the belief that they are truly hidden and anonymous. They claim their right to free speech and latch onto the Fourth Amendment to prevent any type of search and seizure that seeks to expose their true identity.
The conundrum here is that, on the one hand, our culture has grown to thrive on juicy gossip tidbits because gossip gives us a glimpse of someone else’s life that we may not be exposed to in our daily routine. It makes us interested, it motivates us to read the “sluttiest girl” thread, and maybe we even up-vote it, because we learned something private and scandalous, and all of that makes us feel better about ourselves. We can comfort ourselves feeling that our own privacy and secrets are still safe, while our peers find themselves under the bright lights of the world’s stage. Feelings quickly turn, though, when the stage rotates and the lights flip on to expose and defame us. We take the insults personally and feel the world closing in on us when the reputation we took so long to build is chipped away by an anonymous comment (The Future of Reputation discusses this issue in great detail).
The solution to this problem of loving gossip while avoiding gossip about oneself comes down to two possibilities. One, we can continue to fight gossip by publishing more gossip about others for vengeance. After all, our secrets would not be so bad if everyone else’s secrets were out there, and maybe ours would even pale in comparison to the next juicy tidbit. Eventually, though, this approach would snowball into tons of gossip pages that would hurt many people and cause significant damage to unstable individuals. Instead, we can take the second approach: as the model Liskula Cohen did, we can cyberSLAPP defamatory posters to show that Internet anonymity is a privilege and not a right. It is a privilege that should be used to protect journalists and their confidential sources like those here; it should not be used to protect defamatory and envious individuals who spit hate at people who are not public figures and do not warrant such malicious words. Let us reframe our thinking about anonymous Internet usage to encourage users to become more mindful of the content they broadcast to the world. For this to happen, the legal framework must adjust to the technology and allow for an expeditious cyberSLAPP process, where people do not have to go through the lengthy process of filing lawsuits to expose the monsters of the Internet.