Wednesday, 30 December 2015

"Computer Behind Pixar": Teaching Computational Thinking for 3D Modeling and Representation

How do you teach a group of middle- or high-schoolers about computer graphics without setting them down in a computer lab or showing them code? How do you teach them about 3D geometry without writing down a single mathematical formula on the board? And how, without doing all these things, can you nevertheless equip them with the vocabulary and intuition to be able to discuss and understand concepts in computer graphics, geometry, and representation?
Pin art toy: what better way to explain height fields?

That was our goal, and our chosen plan of attack was to flood the senses: let our students touch and explore physical models, work through group activities, watch video clips, participate in class discussions, and see demos. We filled our classroom with 3D printed models of various materials, faceted animal shapes, wooden mannequins, pin art boards, crayons, and fuzzy pipe cleaners.
What do all these objects have in common?

These physical models serve as examples and applications of different representational choices, including voxel grids, meshes, and height fields. Having physical examples to point to and explore can launch a discussion of different representational (3D modeling) choices.

Splash 2015 @ MIT

On November 22, 2015, Hijung Valentina Shin, Adriana Shulz, and I taught a 2-hour high-school class as part of MIT's yearly Splash! program - a Fall weekend during which thousands of high-schoolers flood hundreds of MIT's classrooms to be taught anything and everything. 

In our Splash! classroom, we sought to ask and answer the following questions: How is an animated character created? How can we represent different types of 3D structures? What kind of modeling decisions are made for a special effects film? What techniques do anthropological reconstruction, 3D printing, and game design have in common?

Importantly, we believed that these questions could be answered on an intuitive level, with no mathematical prerequisites. What better way to motivate the study of the mathematical and computational sciences than to give students a faint whiff of the awesome things they would be able to accomplish and think about in greater depth if armed with the right tools? 

Computational thinking to the rescue!

Here I will briefly outline the structure of our 2-hour class and the decisions made along the way, to provide possible inspiration for similar classroom activities and lessons. For the benefit of others, we have made all our slides available online.

Coding without coding

Target shape that one student described to the other using only
a set of provided primitives: colored squares, line segments, or
polygonal shapes.
Our ice-breaker activity first introduced the concepts of representational primitives and algorithmic decisions. Students split up into pairs, armed with grids and sketching utensils (colored crayons or pencils). One student was given a target shape, a set of primitives, and instructions. The goal was to supply one's partner with a sufficient and clear recipe to reproduce the target shape as accurately as possible. Some students could only specify one grid cell at a time with coordinates and a target color. Another set of instructions armed students with a ruler and the ability to specify starting and ending coordinates of line segments. A third group of students had polygonal shape rulers – e.g. triangles, squares, circles. Students could tell their partners to center a shape at specific coordinates.
Polygonal primitives
(ordered on Amazon)

Overall, we gave different student pairs different primitives:
  • pixels (colored squares)
  • line segments
  • polygonal shapes
We gave all students the same amount of time to complete this activity in pairs (15 minutes), after which students showed off their creations to their partners and the rest of the class. These creations were hung around the classroom, to the amusement of the students.

This gave us a great launching pad for discussion about the trade-offs between representational accuracy and algorithmic efficiency. We asked students: What did you find easy and hard? Were there parts of the shape that were well represented by your primitives? Could everything be represented by the primitives? What took you the longest? How many individual primitives did you end up using?

This kind of activity (or a variant of it) makes a good introduction-to-programming exercise, as students have to think about formalizing clear, step-by-step instructions for their partner to carry out. The full instructions and templates for our activity are included here.
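For readers who want the connection to programming spelled out, here is a minimal sketch in Python (the command names and the target shape are made up for illustration) of what a pixel-primitive recipe becomes once a computer, rather than a partner, executes it:

    # A "recipe" is a list of instructions over pixel primitives; the
    # interpreter carries them out exactly as a partner with crayons would.
    GRID_SIZE = 8

    def run_recipe(recipe):
        """Execute ("fill", row, col, color) instructions on a blank grid."""
        grid = [["." for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]
        for command, row, col, color in recipe:
            if command == "fill":
                grid[row][col] = color
        return grid

    # The target shape: a small triangle described one grid cell at a time.
    recipe = [
        ("fill", 5, 3, "G"),
        ("fill", 4, 3, "G"), ("fill", 4, 4, "G"),
        ("fill", 3, 3, "G"), ("fill", 3, 4, "G"), ("fill", 3, 5, "G"),
    ]

    for row in run_recipe(recipe):
        print(" ".join(row))

The line-segment and polygon variants of the activity correspond to richer primitives: the same shape takes far fewer instructions, which is exactly the accuracy-versus-efficiency trade-off the follow-up discussion aims at.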


Computer behind Pixar

Inspired by the recent hype around Pixar* and particularly the Boston Museum of Science's temporary Pixar exhibit, we called our class "Computer behind Pixar". The common goal of the exhibit and other educational media about Pixar is to hook the general public with the beloved animations and characters in order to introduce and motivate the underlying mathematical and scientific concepts. In fact, Mike from Monsters Inc. served as a recurring element throughout our activities, though we branched beyond Pixar, and beyond animation more generally.





* Reference links on the topic of math behind Pixar:



We described and showed a video about the rendering pipeline*, and drew attention to the importance of modeling at the core of this pipeline, as the initial step that all future steps crucially depend on. We defined modeling as a mathematical representation composed of primitives.
The rest of our discussion centered around different representational choices and their properties.





* More rendering resources:
Video about rendering in Pixar
Article about rendering in "Inside Out"
Character rendering (dark knight)
Rendering pipeline summary




Tangible examples of 3D representations


3D printed models are a tangible
demonstration of discretization and
the resolution issue.

Voxel grids

We introduced the concept of discretization, necessary for the representation of shapes in digital computers: 2D shapes as pixels and 3D shapes as voxels. We reminded students of the ice-breaker activity where grid cells were used as primitives.   
We then discussed voxel grids as one form of representation for 3D objects, commonly used for 3D printing. We talked about the resolution issue: the trade-off between accuracy and efficiency. We passed around physical 3D printed models at various resolutions, similar to the models pictured on the right.
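To make the resolution trade-off concrete, here is a small sketch (Python with NumPy; the sphere and the resolutions are arbitrary choices) that voxelizes a sphere at a few resolutions and counts how many voxels must be stored:

    import numpy as np

    def voxelize_sphere(resolution):
        """Occupancy grid for a unit sphere sampled on a resolution^3 voxel grid."""
        coords = np.linspace(-1.0, 1.0, resolution)   # voxel centers in [-1, 1]
        x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
        return x**2 + y**2 + z**2 <= 1.0              # True = voxel lies inside

    for res in (8, 32, 128):
        inside = voxelize_sphere(res)
        print(f"resolution {res:>3}: {inside.sum():>9} filled voxels of {inside.size:>9} total")

Doubling the resolution makes the model smoother but multiplies the storage by eight - the same accuracy-versus-efficiency tension from the ice-breaker.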

Physical models to demonstrate the
differences between volumetric and
boundary representations. One is much
lighter! Why? It requires less material
to represent (and store).

Triangular meshes

In talking about efficiency, we introduced the notion of boundary representations, specifically meshes, for representing 3D objects without having to represent and explicitly store all the internal voxels (the volume). 
We connected the boundary representation to the ice-breaker activity, where in 2D, line segments were used to represent the target shape's boundary. We then showed students a demo of MeshLab, and passed around physical examples of volumetric and boundary representations.
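As a sketch of what a boundary representation actually stores, here is a toy triangle mesh in Python (a tetrahedron with arbitrary vertex coordinates): just a vertex list and a list of triangles indexing into it, with no interior voxels at all.

    import numpy as np

    # A mesh stores only the surface: vertices plus triangles given as indices
    # into the vertex list. This one is a tetrahedron.
    vertices = np.array([
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])
    triangles = np.array([
        [0, 1, 2],
        [0, 1, 3],
        [0, 2, 3],
        [1, 2, 3],
    ])

    def surface_area(vertices, triangles):
        """Half the norm of each triangle's cross product, summed."""
        a, b, c = (vertices[triangles[:, i]] for i in range(3))
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

    print(len(vertices), "vertices,", len(triangles), "triangles,",
          "area =", round(surface_area(vertices, triangles), 3))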

CSG


We moved on to discuss how simple shapes can be combined with different operations to create more complex shapes - in 3D, via constructive solid geometry (CSG). We reminded students that the ice-breaker activity also contained polygonal primitives in 2D. For 3D, we showed students a demo of OpenSCAD and discussed the primitive operations (union, intersection, difference, ...) that can be performed on shapes. We also discussed applications in manufacturing.
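OpenSCAD works on exact boundary geometry, but the CSG operations themselves are easy to illustrate on the voxel grids from earlier. Here is a rough Python sketch (shapes and sizes chosen arbitrarily) in which union, intersection, and difference become boolean operations on occupancy grids:

    import numpy as np

    res = 64
    coords = np.linspace(-1.0, 1.0, res)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")

    sphere = x**2 + y**2 + z**2 <= 0.8**2                                  # primitive 1
    cube = (np.abs(x) <= 0.6) & (np.abs(y) <= 0.6) & (np.abs(z) <= 0.6)    # primitive 2

    union        = sphere | cube     # material present in either shape
    intersection = sphere & cube     # material present in both shapes
    difference   = cube & ~sphere    # the cube with the sphere carved out

    for name, shape in [("union", union), ("intersection", intersection),
                        ("difference", difference)]:
        print(f"{name:>12}: {shape.sum()} filled voxels")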

Height Fields

Height fields were introduced with the help of pin art boards, as pictured at the beginning of this article. Students played with the pin boards and revisited the concepts of discretization and representation. We asked students: which kinds of shapes or surfaces can be represented this way, and which cannot?
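The pin board maps directly onto a data structure: one height per pin on a 2D grid. Here is a tiny sketch (Python; the hemisphere is just an example shape) of "pressing" a shape into the board:

    import numpy as np

    # A pin art board as a height field: one height value per pin.
    n = 16
    coords = np.linspace(-1.0, 1.0, n)
    x, y = np.meshgrid(coords, coords, indexing="ij")

    radius = 0.9
    heights = np.sqrt(np.maximum(radius**2 - x**2 - y**2, 0.0))  # hemisphere pressed in

    print("pin heights along the middle row:")
    print(np.round(heights[n // 2], 2))

A single height per (x, y) location is exactly why overhangs cannot be represented - the answer the pin boards nudge students towards.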

Procedural Modeling

The grass in Pixar's Brave was created with procedural modeling,
using parametric curves and randomness.
A great hands-on demo of this kind of modeling can be found at Khan Academy's Pixar-in-a-Box.
We discussed how shapes can be created by specifying procedures on primitives (beyond the primitive operations in CSG). We showed demos of solids of revolution (what better way to motivate a concept that, for most students, first appears only in college calculus?). We discussed how procedures like revolution and extrusion can be performed along different paths to create all sorts of complex shapes, how these paths can be further parametrized so that the revolution or extrusion changes along the path, and how randomness can be used to add variability to the representation.
We discussed applications to modeling trees, forests, grassy fields, crowds, and cities.
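For the solids of revolution, a rough sketch (Python; the vase-like profile and the amount of jitter are invented for illustration) shows both the procedure and where randomness can enter:

    import numpy as np

    def surface_of_revolution(profile, n_steps=24):
        """Rotate a 2D profile (list of (radius, height) pairs) about the
        vertical axis, producing a ring of vertices at each step."""
        angles = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
        vertices = [(r * np.cos(t), r * np.sin(t), h)
                    for r, h in profile for t in angles]
        return np.array(vertices)

    # A vase-like profile; jittering the radii is the "randomness" knob that
    # procedural systems use to grow many similar-but-distinct shapes.
    profile = [(0.5, 0.0), (0.8, 0.5), (0.4, 1.0), (0.6, 1.5)]
    jittered = [(r + np.random.uniform(-0.05, 0.05), h) for r, h in profile]

    print(surface_of_revolution(profile).shape)    # 4 rings x 24 steps = 96 vertices
    print(surface_of_revolution(jittered).shape)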

3D Representation  | Primitives           | Operations (recipe)
Voxel grid         | Voxels               | Material specification for each voxel
Triangle mesh      | Triangles            | List of triangles with locations
CSG                | Basic shapes         | CSG operations (union, intersection, etc.)
Height field       | Points with heights  | Assignment of heights to points
Procedural model   | Basic shapes         | Procedure (e.g. extrusion along a path)


A new way to look at things

With our class, we hoped to give students a look at the modeling decisions that underlie all the animated films, video games, and special effects they see on a daily basis. We wrapped up our class with a thought exercise, putting students in the position of making decisions about how to model different objects. We told them to think about the different representations we had discussed: the primitives and operations required. We told them to consider the trade-off between accuracy and efficiency. Given a representation, we also told them to think about its usability - what kinds of use cases are being considered, e.g. whether the modeled object needs to be animated and how.

Students were asked to brainstorm how they would model the following objects: buildings, cities, fabric, hair, grass, water. Along the way, we showed them image and video demos (all these links can be found in the slides). We passed around more physical models. Together, we watched a "behind the special effects" video that showcased the kinds of 3D models used in movies - a great visual review of the many representations covered in our class. We told students to look around and realize that 3D modeling decisions underlie many other applications: special effects in films, video games, simulations, anthropological reconstructions, product design, urban planning, robotics, and 3D printing. To remind them that they had been armed with a new way to look at things, students took home polygonal stickers.



Monday, 24 August 2015

Hyperconnectedness leads to hyperactivity

Although the term "hyperconnected" already exists, I will use the following user-centric definition: the embedding of an individual within the internet - the individual's omnipresence on the web. People have all sorts of outlets for posting, storing, and sharing all sorts of content: for example, you can post your photos on Facebook, Google Photos, Instagram, Flickr, Snapchat, etc.; you can blog on Blogger, Wordpress, Tumblr, etc.; you can write about articles, news, your day, and your thoughts on Twitter, Facebook, Google+, etc.; you can exchange information on Quora, Reddit, etc., and links on Pinterest and Delicious; you can share your professional information on LinkedIn, and your video creations on YouTube, Vimeo, and Vine. You get the point. Although some of these internet services overlap, each has enough features of its own to be tailored to particular use cases (not to mention slightly different communities and audiences). I've personally found that there are enough differentiating features (at this point, at least) to warrant separate posts on separate sites. And what does this all lead to? Hyperactivity, I claim.

source: http://www.coca-colacompany.com/stories/5-tools-for-staying-tech-savvy-in-a-hyper-connected-world

With so many ideas, thoughts, suggestions, and opinions swirling around, a whole world of possibilities opens up to the individual - from digesting all the content that is posted, to posting one's own content. The posts of others inspire one to create and do, and the social interconnectedness - the awareness that your content will be widely seen - drives one to post as well. This self-reinforcing cycle is the perfect breeding ground for creativity and content-creation. We live not just in the information age - we live in the creativity age*. Yes, people have always created before, but now that creations are visible to the whole world, they can stand on the shoulders of creative giants. Ideas are exchanged and evolve at the speed of fiber optics. People hyperactively create.

* side note: because creativity correlates with content-creation here, we're generating significantly more data than ever before; stay tuned for a very intelligent (and creative) Internet of Things!

At this point, the discussion portion of this blog post ends, and I share my excitement for some of the awesomeness on the creative web below. These are the reasons why there are never enough hours in the day, or years in a lifetime. I am constantly inspired by how many different things people master and how creative they can be in all forms of things and activities. The rest of this post can be summarized as #peopleareawesome.

The activities I list below may at first glance seem like a random sampling, but they're a set of activities that are united by the ability to do them without being a total expert (with some practice you can already achieve something!) and the ability to do them on the side (as a hobby, for short periods of time, with limited equipment).

Electronics, robotics, RC

My 15-year-old brother has learned to put together electronics and build RC vehicles, planes, boats, and drones by watching lots of YouTube videos. This is the type of knowledge that no traditional education can deliver at such a density and speed. This creative maker culture is largely driven by being part of a large community of like-minded individuals (no matter what age) that positively reinforce each other via posts, discussions, and likes. An individual not connected to the internet might have a very small community (if any) with much sparser positive reinforcement, which I claim would result in fewer amazing creations.



New art styles

Art is a hobby of mine, and I'm big on constantly trying new things. There's always something new that the web coughs up in this regard, beyond the traditional styles of sketching and painting. For instance, consider these widely diverging artistic styles:

check out the art of wood burning

and the art of painting on birch bark

and painting by wet felting

and check out this crazy video of candle carving

also check out: 

Scrapbooking and crafting

personal memories and trips can be creatively captured in scrapbooks

Culinary masterpieces

Judging by the popularity of cooking channels, and food-, cooking-, and baking-related tags and posts on different social networks, people love to share the dishes, recipes, and culinary masterpieces that they create. I mean, just look at this:


and themed foods for any occasion: 

Travel blogs and photography

I'm also hugely inspired by all the travel blogs people put together. Not only do they find the time to visit amazing places and capture them from all sorts of beautiful angles, they also blog about it: http://fathomaway.com/slideshow/fathom-2015-best-travel-blogs-and-websites1/

The really creative also put together annotated, narrated, and musical slideshows and videos.

I'm not even going to go into all the amazing photography people do. I will leave you with this:



Data visualization

Both an art and a science, how to visually depict data is very relevant in this day and age. I'm inspired by creativity, once again:


Creative writing

Other than blog writing, I like the idea of creative writing on the side to de-stress and get some brain juices flowing - here are some things worth trying and checking out (and possibly submitting to, if you're extra adventurous): short SF stories, poetry, funny captions.

Another form of "creative writing" is putting together tutorials, explanations, etc. on all sorts of topics that interest you. It allows you to organize your thoughts and attempt to explain some content with a specific audience in mind. I love to write, explain, and write explanations - if only there were more time in the day...

How it all ties together

People inspire others by taking photos of their creations and posting them on photo-sharing sites, they create videos of the how-to process to motivate others to try, and they bookmark ideas/links they like. They then blog or tweet or chirp about their process and final products and otherwise share their creations with their social networks and the world. The resulting online interactions (sharing of ideas, discussions, comments, and likes) spark the next cycle of creativity, and on it goes. (I posted some of the pictures above with the intention of inspiring others to try some new things as well.)

In short, there is no shortage of activities to occupy oneself with if there is some time on the side. Of all the activities and links listed above, I've tried about 70%. I am definitely hyperactive when it comes to creating, and the internet age fuels a lot of that for me by constantly feeding me new ideas. I believe that when you try new things, you expand your brain (perhaps via the number of new connections/associations you make), which benefits you in many more ways than you might first think. I believe that engaging in all manner of creative activities has long-lasting positive effects on intellectual capability and psychological well-being, and that instead of plopping down statically to watch something, creating something keeps your brain "better exercised", so to speak.

Tuesday, 14 July 2015

The Experiencers: this is your last job

With the rapid growth of what A.I. is capable of, the rapid advancement of technology (via Kurzweil's Law of Accelerating Returns), and the massive reach of the internet and the cloud, the obvious question is: what will the role of humans be when even the intellect can be mechanized? I offer my musings on a potential kind of future here: http://web.mit.edu/zoya/www/TheExperiencers_SF.pdf

Thursday, 2 July 2015

Where is innovation, and who's pulling whom along for the ride?

In the modern landscape of giants like Google and Facebook, and the scurry of activity generated by tech start-ups in the SF and Boston areas and beyond, one of the big questions is: where does academia sit? And how do all these forces shape each other?

Big companies are no longer shaping just the industry world - they are having massive impacts on academia - both directly (by acquiring the best and brightest academics) and indirectly (by influencing what kinds of research directions get funded).



This leaves a few hard questions for academics to think about:
To what extent should industry drive academia and to what extent can academia affect where industry is going?

We can follow, for instance, the big companies - sit closely on their heels, learn about their latest innovations, and project where they're likely to be 5-10 years from now. Then use this knowledge to appropriately tailor funding proposals, to direct research initiatives, and to count on the emerging technologies to fall into place. For instance, if you know that certain sensors are going to be in development in the next few years, does it not make sense to already have the applications for those sensors ready - the algorithms, the methods for processing the data? Or does this build up an inappropriate dependence and turn academics into consumers? Taking this approach, you're likely to win financially in the long run - either via funding (because your proposed projects are tangible) or via having your projects, ideas, or you yourself acquired by the big guys (with all the advantages that go along with that). However, does this approach squelch innovation - the thinking outside the box, outside the tangible, and further into the future?

Importantly, where is most innovation coming from these days? In one of the Google I/O talks this year, there was a projection that, in the near future, more than 50% of solutions will come from startups less than 3 years old. Why is this the case? I can think of a number of reasons. First, bright young graduates of universities like MIT and Stanford are taking their most innovative research ideas and turning them into companies, and this is becoming an increasingly hot trend. More and more of my friends are getting into the start-up sphere, and those that aren't are at least well aware of it. Second, many startups are discovering niches for new technologies: whether it's tuning computer vision algorithms to the accuracy required for certain medical applications, applying sensors to developing-world problems like sanitation monitoring, or using data mining for applications where data mining has not been used before. Tuning an application, an algorithm, or an approach to a particular niche requires utmost innovation - that is where you discover that you need to push a computer vision algorithm to an accuracy that was never achieved before, to create a sensor with a lifespan that was not previously imaginable, to make things work fast, make them work on mobile, make them work over unstable network connections, make the batteries last. Academically, you rarely think about all of the required optimizations and corner cases, as long as the proof-of-concept exists (does it ever really?), but in these cases, you have to.

Perhaps we can think of it this way: the big guys are developing the technologies that the others do not have the resources for; the small guys are applying the technologies to different niches; and the academics are scratching their heads over application areas for these technologies and the next-to-emerge technologies - never quite rooted in the "what we have now" and always stuck (or rather, comfortably seated) in the "what if". Who's shaping who? It looks like they're all pulling each other along, sometimes gradually, other times in abrupt jerks. At any given time you might be doing the pulling or be dragged along for the ride.

So where does that leave us? Are big companies, little companies, and academia taking distinctly different routes, or stepping on each other's toes? At this point, I think there is a kind of melting pot without sharp boundaries - a research project slowly transitions into a start-up, which then comes under the ownership of a big company; or a research lab transplants its headquarters into a big company directly; or internal organizations like research and advanced-technology labs (Google Research, GoogleX, ATAP) have the feel of start-ups with the security and backing of a large company. It's a unique time, with everything so malleable. But I'm not sure this triangle of a relationship has reached any sort of equilibrium quite yet... We will have to wait until the motions stabilize to see where the companies and the universities stand, and whether they will continue to compete in the same divisions, or end up in vastly different leagues.







Wednesday, 24 June 2015

Imagining your imagination

Given the news making such a splash recently - "dreaming A.I." and "machines with imagination" (http://googleresearch.blogspot.fr/2015/06/inceptionism-going-deeper-into-neural.html) - a few interesting questions are up for pondering...

An NN's (neural network's) "imagination" is a property of the data it has seen and the task it has been trained to do. So an NN trained to recognize buildings will hallucinate buildings in novel images it is given, an NN trained on YouTube videos will discover cats where no cats have ever been, etc. So, an NN trained on my experience, one that sees what I see every day (and provided it has the machinery to make similar generalizations), should be able to imagine what I would imagine, right?
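For the technically inclined, the "hallucination" in the inceptionism post boils down to gradient ascent on the input image. A rough sketch of the idea with an off-the-shelf pretrained network is below (PyTorch; the layer slice, step size, iteration count, and image path are placeholders, and this is my paraphrase of the technique, not Google's code):

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Nudge an input image in the direction that amplifies whatever an
    # intermediate layer already responds to ("see more of what you see").
    # A real implementation would also normalize the input as the network expects.
    cnn = models.vgg16(pretrained=True).features.eval()
    layer = cnn[:20]                      # an intermediate slice of the network

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
    img = preprocess(Image.open("my_photo.jpg")).unsqueeze(0)
    img.requires_grad_(True)

    for _ in range(20):
        layer(img).norm().backward()      # gradient of the activation strength
        with torch.no_grad():
            img += 0.02 * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0.0, 1.0)
            img.grad.zero_()

Swap in a network trained on your own photo stream and, in principle, the same loop hallucinates the things you tend to look at.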

Facebook and Google and other social services should be jumping on this right now to offer you an app to upload all your photo streams and produce for you "figments of your imagined imagination" or "what your photos reveal about what might be in your mind" (the high-tech NN version of personality quizzes, perhaps). Basically, you can expect the output to be a bizarre juxtaposition of faces and objects and shapes (like in the news article) but customized just for you! Wait for it, I'm sure it's just around the corner.

So if we strap on our GoPros or our Google Glasses and run out into the world, hungrily collecting every moment, every sight, and every experience that we live through, can we then hope that our very own personal A.I.s will be able to learn from all this data to remember our dreams when we can't, guess the word off the tip of our tongue, make the same connections, parallels, and metaphors, and know what new thought our mind could have jumped to from the context of the previous conversation? As we envision that A.I. will one day augment us, do we take into account the fact that the augmentation will not be a simple division of labor? "I as the human being will keep the superior, heuristic, and creative tasks for myself, and leave my duller mechanical half to deal with all the storage and lookup and speed that I lack" may be an outdated thought; perhaps "your" A.I. will be able to make bigger generalizations, leap further, find more distant connections - to innovate and create. The correct question should then be: what can YOU contribute to your A.I.?

Thursday, 18 June 2015

CVPR recap and where we're going

The Computer Vision and Pattern Recognition (CVPR) conference was last week in Boston. For the sake of the computer vision folk (at least in my group), I created a summary/highlights document of some paper selections here: http://web.mit.edu/zoya/www/CVPR2015brief.pdf

It takes an hour just to read all the titles of all the sessions - over 120 posters/session, 2 sessions a day, 3 days... and workshops. This field is MONSTROUS in terms of output (and this is only the 20% or so of papers that actually make it to the main conference).
Thus, having a selection of papers instead of all of them becomes at least a tiny bit more manageable.

The selections I made are roughly grouped by topic area, although many papers fit in more than one topic, and some might not be optimally grouped - but hey, this is how my brain sees it.

The selection includes posters I went to see, so I can vouch that they are at least vaguely interesting. For some of them I also include a few point-form notes, which are likely to help with navigation even more.

Here's my summary of the whole conference:

I saw a few main lines of work throughout this conference: CNNs applied to computer vision problem X, a metric for evaluating CNNs applied to computer vision problem X, a new dataset for problem X (many times larger than the previous one, to allow CNNs to be applied to problem X), and a new way of labeling the data for that new dataset.

In summary, CNNs are here to stay. At this conference I think everyone realized how many people are actually working on CNNs... there have been arxiv entries popping up all over, but once you actually find yourself in a room full of CNN-related posters, it really hits you. I think many people also realized how many other groups are working on the exact same problems, thinking about the exact same issues, and planning on the exact same approaches and datasets. It's become quite crowded.

So this year was the CNN hammer applied to just about any vision problem you can think of - setting new baselines and benchmarks left and right. You're working on an old/new problem? Have you tried CNNs? No? The crowd moves on to the next poster that has. Many papers have "deep" or "nets" somewhere in the title, with a cute way of naming models applied to some standard problem (ShapeNets, DeepShape, DeepID, DevNet, DeepContour, DeepEdge, segDeep, ActivityNet). See a pattern? Are these people using vastly different approaches to solve similar problems? Who knows.

So what is the field going to do next year? Solve the same problem with the next hottest architecture? R-CNNs? even deeper? Some new networks with memory and attention modules? More importantly, do results get outdated the moment the papers are submitted because the next best architecture has already been released somewhere on arxiv, waiting for new benchmarking efforts? How do we track whether the numbers we are seeing reported are the latest numbers there are? Are papers really the best format to present this information and communicate progress?

These new trends in computer vision leave us with a lot of very hard questions to think about. It's becoming increasingly hard to predict where the field is going in a year, let alone a few years from now.

I think there are two emerging trends right now: more industry influence (all the big names seem to be moving to Google and Facebook), and more neuroscience influence (can the networks tell us more about the brain, and what can we learn about the brain to build better networks?). These two forces are beginning to increasingly shape the field. Thus, closely watching what these two forces have at their disposal might offer glimpses into where we might be going with all of this...





Wednesday, 17 June 2015

The Computer History Museum in SF

The Computer History Museum in SF was great! It was a bit of a random stumble during a trip along the West Coast a few weeks ago, but it left a memorable trace! The collection of artifacts is quite amazing: name just about any time in computer history (ancient history included) and any famous computer (Babbage Engine, Eniac, Enigma, Univac, Cray, etc.) and some part of it is very likely at this museum. We totally assumed the museum would be a 2-hour stopover on the way to other SF sights, but ended up staying until closing, without even having covered all of it.



As a teaser I include a few random bits of the museum that caught my attention (I may have been too engrossed in the rest of the museum to remember taking pictures).

One of the oldest "computers": the Antikythera mechanism - I had never heard of it before! The Ancient Greeks continue to impress! It shows another timeless quality of humanity: our technological innovations are consistently driven by our need for entertainment (in the case of the Ancient Greeks, such innovations can be linked back to scheduling the Olympic Games). At this museum, there was a full gallery devoted to old calculators and various mechanical computing implements from different cultures.


A fully working constructed version of Babbage's Difference Engine - completed in 2008 according to Babbage's original designs (which apparently worked like a charm without any modification!). Museum workers crank this mechanical beast up a few times a day to the marvel of the crowd. Once set, this machine can compute logarithms, print them on a rolling receipt, and simultaneously stamp an imprint of the same values into a mold (for later reprinting!). Babbage also thought of what happens when the imprinting fills up the whole mold - a mechanical mechanism halts the whole process so that the tablet can be replaced! That's some advanced UI, developed without any debugger or user studies.

Based on the over-representation of this Babbage Engine in this post, you can tell that quite a bit of time was spent gawking at it:



By the way, here's a real (previously functional) component from the Univac - unlike that panel with lights and switches at the top of this post. Apparently, that panel did not do anything: it was purely for marketing purposes, for whenever the then-investors came around to check out this "machine". It is much more believable that something real is happening when you have a blinking dashboard of some kind and large buttons that serve no purpose but look "very computational". Looks like this continues to be a powerful marketing strategy to this day :)


Just a fun fact (no, this is not the origin of the word "bug", which is what I thought this was at first, but does demonstrate some successful debugging):


The following describes quite a few computer scientists I know:


There was a gallery devoted to Supercomputers and another gallery devoted to computer graphics. Look at what I found there - every Graphics PhD student's rite of passage (by the way, the Intrinsic Images dataset is sitting in my office, no glass case, but we will soon start charging chocolate to see it):


There was also a whole gallery devoted to robots and A.I. (an impressive collection), a gallery devoted to computer games, and a gallery devoted to the Apple computer just to name a few.


By the way, something I didn't know about the Apple computer - here is some awesome bit of marketing that came out in 1984:


There was a gallery devoted to the Google self-driving car. I like how this is in the Computer History museum, because really, you can't put any computer technology in a museum and assume it will remain current for very long. The drone in the corner of that room had a caption that mentioned something about possible future deliveries. Old news. I've seen bigger drones :) 


That's about the extent of the photos I took, because photos really fail to convey the environment that a museum surrounds you with. It is a museum I would gladly recommend!

As an afterthought, it's interesting to visit a "history" museum where you recognize many of the artifacts. It gives you a sense of the timescale of technological innovation, which continues to redefine what "history", "progression", and "timescale" really mean... notions that we have to regularly recalibrate.








Saturday, 13 June 2015

Google I/O Recap

Announcements from Google I/O are increasingly popping up over the media.
Last year, after going to Google I/O I compiled a series of slides about some of the top announcements and some of the other sessions I went to: http://web.mit.edu/zoya/www/googleIOrecap.pdf
This year, I watched many Google I/O videos online, and I've compiled a small summary here: http://web.mit.edu/zoya/www/googleIO2015_small.pdf
As a researcher, I find it instructive to look at where giants such as Google are moving in order to get a sense of which research directions and developments will be especially in demand in the near future. Thus, I look at the talks from an academic perspective: what are the key research questions surrounding every product? I tried to include some of these in my latest slides.

Tuesday, 2 June 2015

Why Google has the smartest business strategy: openness and the invisible workforce

Google works on an input/output system. In other words, for everything that Google developers create, Google accepts input from users and developers around the world. Note that the latter group/community is orders of magnitude larger than the former, so by harnessing the resources and power of users and developers around the world, Google's global footprint becomes significantly larger.

For instance, Google produces continuous output in the form of products and developer platforms, and accepts input in the form of development directions and, most importantly, apps. By creating platforms that developers can build on top of, Google harnesses the users that want the apps. The more that Google releases (e.g. SDKs), the more developers are looped in to create new apps, and the more users get pulled in to use the apps, acquiring the Google products in the process. Thus, the number of people around the world who are increasing the consumer base for Google products far exceeds the number of Google employees.

In fact, the number of people indirectly working for Google is huge. Consider the Google Developer Groups (GDGs) that can be found all around the world - independent organizations of developers and enthusiasts who get together to bond over Google's technology (they also give Google product-related talks and host help sessions for their local communities, all on their own time). What's in it for the members? Members of GDGs have the support and network of individuals with similar interests. Google wins by having a global network of communities that are self-sufficient and self-reinforcing and do not require Google support or investment. Google Trusted Testers are non-employees who test beta products for Google. What's in it for the testers? First-hand experience with Google products. What's in it for Google? A workforce for whom being "first to try a product" is sufficient reward.

The Google Student Ambassador Program gives college students an opportunity to exhibit leadership by acting as a liaison between Google and their home institution, putting on Google-supported events (information sessions, hackathons, etc.) and forming student communities. The student ambassador's motivation is a nice line on their resume and great experience communicating with both industrial and institutional personnel and organizing events. Google wins by being promoted on college campuses and having easier avenues for student recruitment... all for the price of providing some Google-themed freebies at college events.

Then there are all the other smaller organizations that are not directly supported by, but are affiliated with, Google. For instance, the Google Anita Borg Alumni Planning Committee that I am part of is devoted to increasing visibility of and interest in computer science among minorities and to helping promote diversity in computer science education. We, as a group of women distributed globally, start initiatives and put on events (such as the following) in our local communities to advance these missions. Google provides the branding. We win through affiliation with Google; Google wins through affiliation with philanthropic organizations. These are just a few of the organizations and communities that are affiliated with but not directly supported (at least financially) by Google. In fact, Google does not need to directly support or control/govern any of these communities precisely because they are self-sufficient and self-motivated - a big win for Google, given the limited investment.

Now consider the yearly Google I/O conference that draws over 5,000 attendees. Many of these attendees are developers who come to the conference to hear first-hand about new product and platform releases (and to participate in hands-on workshops with the Google product developers themselves). These developers then bring this knowledge back to their communities, and contribute their own apps and products to the Google ecosystem. Each year, at this conference, Google announces new support infrastructure to make the use of Google products ever easier (this year, for instance, Google announced new OS and language support for the Internet of Things so that developers can more easily add mobile support to physical objects - think: the smart home). Correspondingly, the number of Google product-driven apps grows. Users of those apps buy Google products and services and continuously provide feedback (either directly through surveys or indirectly by having their interactions and preferences logged on Google servers). Thus, we are all contributors to the growth of the Google footprint.

What can we infer from all of this? Google is firmly rooted in our societies and is here to stay. The number of people supporting, improving, and building on top of Google products is huge - it is Google's invisible workforce. Thus, Google will continue to grow and improve at great speeds.

What lesson can we learn from all of this? Being open (i.e. in terms of software and even hardware) can allow a company to harness the power of other developer and user communities, thus increasing the size of the effective workforce that builds the company's products, directions, and reputation. Google has one heck of a business strategy.


Sunday, 31 May 2015

Freeing humans from redundancy

This has been, and should be, the ultimate goal of mechanizing and automating the world around us. The human mind is too precious a resource to waste on repeatable actions, and we've been working on automating those since the Industrial Revolution. With modern A.I., more of this is becoming possible.

Consider showing a robot, once, how to clean a window - defining the physical boundaries and indicating preferences for the action sequence. You specify the actions once, and the robot repeats the sequence for a specified period of time (e.g. an hour) at regular intervals (say, once a week). Consider, in such a way, seeding a large variety of actions - watering plants, making mashed potatoes for dinner, tuning the bike, dry-cleaning your suits, etc. I am not imagining a single robot with the A.I. to do all these tasks (not to mention knowing when to do them) - I am rather imagining an army of simple machines that can be individually programmed by their owners (programming not in the coding sense, but in the show-by-example sense).

You put on your VR (virtual reality) headset while sitting on your hotel bed in SF, you log into your Boston home, and you seed a bunch of actions through the FPV cameras on your machines (machine 1 will water the plants on your balcony, machine 2 will set some bread baking, machine 3 will scan some documents for you after finding them in the relevant folders on your shelf). You do the seeding for your cottage on Long Island as well. In such a way, without moving off your bed, you have now prepared your Boston home for your arrival tomorrow, and have checked in on your cottage.

Here's the critical point: none of these machines has had to be hard-wired for your house or for any of the actions you have assigned them - they simply have the capacity to learn an action and a schedule for it (which does not require any complex A.I. and is completely feasible already). It is up to you to make the difficult human decisions of setting the schedule - when and how much to water the bonsai and the petunias, how long to wait before the bread has just the right crust for your taste, which of your clothes need special attention during dry cleaning, etc. Then the machine executes a repeatable sequence. If a condition arises for which the machine requires a decision to be made, you are pinged. The next time this condition arises, the machine has a stored solution. With time, your army of machines has been customized to all your preferences, and you have been taken out of the loop for anything that does not require an expert opinion (yours) or an indication of specific preferences (yours as well). Your mind is freed from anything at all repeatable or redundant, with its capacities available for the heuristic decision making that is the hallmark of human intelligence. You spend your time delivering instructions and managing outputs with utmost efficiency.
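A toy sketch of this show-once, repeat-on-schedule, ping-on-the-unknown loop (Python; the class, the actions, and the "ping" are all hypothetical stand-ins for real hardware and messaging):

    class HouseholdMachine:
        def __init__(self, name):
            self.name = name
            self.routine = []       # the owner's one-time demonstration
            self.decisions = {}     # cached answers to conditions seen before

        def demonstrate(self, actions):
            """Record the demonstrated action sequence."""
            self.routine = list(actions)

        def run(self, conditions, ask_owner):
            """Replay the routine, deferring to the owner only for unseen conditions."""
            for condition in conditions:
                if condition not in self.decisions:
                    self.decisions[condition] = ask_owner(condition)  # ping once
            for action in self.routine:
                print(f"[{self.name}] {action}")

    waterer = HouseholdMachine("balcony waterer")
    waterer.demonstrate(["move to balcony", "water bonsai 100 ml", "water petunias 250 ml"])
    waterer.run(conditions=["soil already wet"],
                ask_owner=lambda c: "skip watering today")  # stands in for a ping to your phone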

I think we will much sooner see this type of future with simple customizable learning agents than one with the courteous all-in-one robotic butler you see in the movies. In fact, you can already remotely control your house heating via Nest and your music system via Sonos, as two examples, all from your Android devices (cell phone, watch). The next step is simply the augmentation of your control options (from clicks and menus) to actions with as many degrees of freedom as your arm gestures permit via a VR device. This puts a larger portion of your household devices and tasks at your virtual fingertips. The Internet of Things is here.

Thursday, 30 April 2015

Filling the internet with more cool science

So the second year of the extreme science SciEx video competition has come to an end, with a new series of cool videos to show for it: http://sciex.mit.edu/videos/. The goal of this initiative is to take more direct steps towards making science and engineering catchier and more (socially) shareable than, say... videos of cute kittens. Videos related to science should not be left to sit in some isolated corner of the internet, to be found only by people who were already looking for them... rather, they should be sprinkled around in every shape and form - extreme, cool, artistic, you name it. They should be able to affect, in some positive way, all sorts of individuals with diverse interests and personalities. Why? Because science teaches us to think, and individuals capable of thinking make society a better place. More sharing of science will lead to a greater appreciation of science, at least a little bit of extra understanding, and a potential reduction of ignorance. If nothing more, by actively sharing science we promote positive attitudes towards it (even if we don't always succeed at sparking longer-term interest)... and positive attitudes lead to positive change... in people, in society, and in government.

We need to educate the next generation of scientists, and we can contribute to this mission in the subtlest of ways and by taking small steps - including having more science and engineering videos circulating on the web. Let's show the world (the next generations) that science can be as extreme as an extreme sport, as beautiful as a work of art, as freeing as a dance. Let's celebrate scientists and engineers for the rockstars that they are. Let's cheer for them louder than we cheer for football players - because they are the ones changing the world we live in.

Sunday, 15 February 2015

Educational nuggets

Quite a while back I attended an MIT Task Force Retreat on Digital Learning. Numerous talks and discussions were given (by various internal MIT groups and committees) about the future of online education and the issues surrounding it. One concept that stood out to me was that of "educational nuggets". I think this is very suitable (and sticky) terminology to describe the bite-sized educational modules - like the 5-10 minute lectures - that have become a popular medium for online courseware and educational websites such as Khan Academy.

The idea of bite-sized lectures comes from educational research showing that a student's attention span does not extend much past about 10 minutes. A sad truth. That is not to say, however, that a student cannot internalize concepts past 10 minutes (if that were really the case, then school systems would not work at all). Rather, the efficiency of learning goes down, and more mental effort needs to be expended to stay attentive - instead of, say, all that mental effort being channelled into learning the concepts.

So, it seems to be most effective to present material to students for about 10 minutes at a time, and then break up the stream by giving students some time to think about the concepts, asking students questions (or asking for their questions), providing a quiz module, initiating a discussion (if applicable), etc. This allows students to more actively internalize the material, apply the concepts, and check that they have understood the past 10 minutes worth of content.

The additional advantage of splitting educational material into nuggets is to break up a course into little, self-encapsulated, independent units. To go back to the previous post, this provides a means for customization: both for the individual student, and for the individual course. Imagine the course of the future: you are a biologist looking to brush up on statistics. Instead of pointing you to a full course offered by the statistics department, or instead of having to specifically design a course on statistics in the biology department, you could be given a set of "nuggets" to complete. These nuggets could come from different places - from the statistics department, from the math department, from the biology department - such that when they all come together, they give you - the biologist - the statistics knowledge you need, in the right context, with maximal relevance.

The concept of educational nuggets naturally raises some questions: is everything really nugget-izable? what about basic courses like calculus that need to be taken in full? who will decide what goes into a nugget? can many small nuggets really be equivalent to a course?
I think if we become more accepting of this form of education and the benefits that we can glean from it, then the answers to these questions will start to emerge through discussions.

The bigger philosophical question is whether we are changing too much, as a human species, and becoming too ADD with all the bite-sized facts, bite-sized tweets, bite-sized news, and potentially bite-sized education thrown at us. Like many things, this is a double-edged sword -- and like the related notion of multitasking, it can either make or break productivity, long-term memory, and understanding. The related benefits/downsides of multitasking will be left for a future post...

Sunday, 8 February 2015

Education as customized paths through knowledge graphs

Lately I've frequently been involved in, and witness to, discussions about the upsides/downsides of online learning in comparison to traditional classroom learning. I'd like to summarize a few of my main views on this point.

The traditional classroom has a 1:N ratio of teachers to students, where N grows large for many basic-level courses. Tutoring can provide a 1:1 ratio, and has been found (by multiple quantitative studies) to be more successful at getting concepts across to students. Why? Tutoring provides customization to the individual, and thus can build off of the knowledge base of that individual. New information can hook onto any existing understanding the individual already has, and this is what can let the concepts stick. New concepts become more tightly intertwined with what the individual already knows (and perhaps cares about), and are thus more relevant than concepts presented in the most general setting, with no customization.

In a recent talk of Peter Norvig's that I went to (Norvig is one of the originators of MOOCs: massive open online courses), he indicated that even artificial tutoring systems can have the same benefits as human tutors, with statistically significant benefits over the traditional classroom. This is very promising, because artificial tutoring is a potentially infinite resource (unlike the finite number of good-quality human tutors). In the same talk, Norvig put up a slide of a dense knowledge graph of all the information that can be available to a student on a particular topic in a particular course (or courses). He drew some squiggly lines through this graph, standing in for unique paths that could be taken through that material. This is the same visual representation of customized learning that I envision, and deeply believe in, for the future of education.

There is no reason why different individuals should take the same paths through learning. Different types of information may be relevant to different people, and a different ordering of material may make more sense to some individuals but not others. Naturally it should be possible to constrain which points an individual should definitely pass through for a particular course/subject matter (to cover the fundamentals), but the paths themselves should be less constrained. This is the diversity that I referred to in my previous post, which is why I believe that online education is the way forward.
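As a rough sketch of what such a path could look like computationally (Python; the concept names and prerequisite edges are invented for illustration), take a prerequisite graph and emit only the nodes a particular student needs, in a valid order:

    from graphlib import TopologicalSorter

    # Map each concept to its prerequisites.
    prerequisites = {
        "distributions": ["counting"],
        "hypothesis testing": ["distributions"],
        "regression": ["distributions", "linear algebra"],
        "experiment design": ["hypothesis testing"],
        "gene expression stats": ["regression", "hypothesis testing"],
    }

    def customized_path(targets):
        """Targets plus all their prerequisites, ordered so prerequisites come first."""
        needed, stack = set(), list(targets)
        while stack:
            concept = stack.pop()
            if concept not in needed:
                needed.add(concept)
                stack.extend(prerequisites.get(concept, []))
        graph = {c: [p for p in prerequisites.get(c, []) if p in needed] for c in needed}
        return list(TopologicalSorter(graph).static_order())

    # A biologist brushing up on statistics for gene-expression analysis:
    print(customized_path(["gene expression stats"]))

Personalization then amounts to choosing the targets (and, with user data, re-weighting which optional nodes and which ordering suit the individual).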

We already have almost all the tools to make this a reality: (1) sophisticated machine learning algorithms that can pick up on trends in user data, detect clusters of similarly-behaving individuals, and make predictions about user preferences; (2) thorough user data through logging and cloud storage, integration of physical and virtual presence and social networks, integration of all of a user's applications and data (and the future of the "internet of things"), universal login systems, etc.

Thus, the question is only one of time.

Tuesday, 3 February 2015

The Popping Rate of Knowledge

I use the term "popping rate" to refer to the amount of novel/interesting/useful material gained in a given time period. If you're used to microwaveable popcorn, you know that as the popping rate decreases past a certain point and pops become rarer - if you don't take the popcorn out of the microwave, it will fry. I think the same goes for my brain when it is trying to suck up knowledge. If I'm watching a really interesting documentary, reading a good nonfiction book, or listening to a captivating talk, I can almost feel the new knowledge and facts pop and fill my brain. I consider the time well-spent if the popping rate is above a certain threshold... however, if the pops become too rare, I feel my brain frying under lack of stimulation. That is when I know to turn off the TV, put down the book, or zone out of the talk... and pursue an activity with a higher popping rate.

In fact, I've found that quantifying the informational/factual content of something using some notion of pops per minute or pops per hour (referring to # of novel bits of information/facts gained during that time), provides a useful frame of comparison between activities.

Saturday, 31 January 2015

Increasing diversity in computer science

I recently organized a panel of MIT computer science researchers to answer questions about computer science to an audience of high-schoolers (http://web.mit.edu/cs-visit-day/qa.html). A lot of interesting discussions came out of it, and a lot to digest for panelists and audience alike.

One of the things that did stick out to me was how many of the panelists did not like their first formal training in computer science (in school, in college). I can't say I was terribly surprised, and I'd be interested to see such a survey done of the broader CS community (e.g. at a research institution) to poll about initial experiences and attitudes.

Here is where I think the problem lies: computer science courses tend to cater to a very narrow audience - maybe the type of audience that likes computer/video games, or the type of audience that likes tech gadgets, etc. Not that you can avoid it: you have to start somewhere - with some (salient) example or application or first program. But once you've settled on something, you might get one group of individuals hooked, but you'll also automatically repel a lot of other people (who are not interested in the application area you've chosen). If that were their only or first opportunity to learn computer science, they might decide they don't like it and never pick it up again - which would be an enormous shame!

What's the solution? Introducing more variety and choice into the computer science curriculum - tailoring it to different tastes (and personalities!). Cater it to people who might like biology or psychology or architecture or design, and show them that computer science can provide them with a toolset, a simulation/virtual environment to test their ideas, a cool exploratory possibility. I believe this is the way forward to increasing the diversity of people in the field of computer science.

In practice, having a lot of variety in a computer science curriculum may not be possible (consider a school with a single programming course and a single teacher to teach it)... in this case, I think online education with its possibilities for individual customization, can come to the rescue... more about this later.