Wednesday, 30 December 2015

"Computer Behind Pixar": Teaching Computational Thinking for 3D Modeling and Representation

How do you teach a group of middle- or high-schoolers about computer graphics without setting them down in a computer lab or showing them code? How do you teach them about 3D geometry without writing down a single mathematical formula on the board? And how, without doing all these things, can you nevertheless equip them with the vocabulary and intuition to be able to discuss and understand concepts in computer graphics, geometry, and representation?
Pin art toy: what better way to explain height fields?

That was our goal, and our chosen plan of attack was to flood the senses: let our students touch and explore physical models, work through group activities, watch video clips, participate in class discussions, and see demos. We filled our classroom with 3D printed models of various materials, faceted animal shapes, wooden mannequins, pin art boards, crayons, and fuzzy pipe cleaners.
What do all these objects have in common?

These physical models serve as examples and applications of different representational choices, including voxel grids, meshes, and height fields. Having physical examples to point to and explore can launch a discussion of different representational (3D modeling) choices.

Splash 2015 @ MIT

On November 22, 2015, Hijung Valentina Shin, Adriana Shulz, and I taught a 2-hour high-school class as part of MIT's yearly Splash! program - a Fall weekend during which thousands of high-schoolers flood hundreds of MIT's classrooms to be taught anything and everything. 

In our Splash! classroom, we sought to ask and answer the following questions: How is an animated character created? How can we represent different types of 3D structures? What kind of modeling decisions are made for a special effects film? What techniques do anthropological reconstruction, 3D printing, and game design have in common?

Importantly, we believed that these questions could be answered on an intuitive level, with no mathematical prerequisites. What better way to motivate the study of the mathematical and computational sciences than to give students a faint whiff of the awesome things they would be able to accomplish and think about in greater depth if armed with the right tools? 

Computational thinking to the rescue!

Here I will briefly outline the structure of our 2-hour class and the decisions made along the way, to provide possible inspiration for similar classroom activities and lessons. For the benefit of others, we have made all our slides available online.

Coding without coding

Target shape that one student described to the other using only
a set of provided primitives: colored squares, line segments, or
polygonal shapes.
Our ice-breaker activity first introduced the concepts of representational primitives and algorithmic decisions. Students split up into pairs, armed with grids and sketching utensils (colored crayons or pencils). One student was given a target shape, a set of primitives, and instructions. The goal was to supply one's partner with a sufficient and clear recipe to reproduce the target shape as accurately as possible. Some students could only specify one grid cell at a time with coordinates and a target color. Another set of instructions armed students with a ruler and the ability to specify starting and ending coordinates of line segments. A third group of students had polygonal shape rulers – e.g. triangles, squares, circles. Students could tell their partners to center a shape at specific coordinates.
Polygonal primitives
(ordered on Amazon)

Overall, we gave different student pairs different primitives:
  • pixels (colored squares)
  • line segments
  • polygonal shapes
We gave all students the same amount of time to complete this activity in pairs (15 minutes), after which students showed off their creations to their partners and other students in the class. These creations were hung around the classroom, to the amusement of the students.

This gave us a great launching pad for discussion about the trade-offs between representational accuracy and algorithmic efficiency. We asked students: What did you find easy and hard? Were there parts of the shape that were well represented by your primitives? Could everything be represented by the primitives? What took you the longest? How many individual primitives did you end up using?

This kind of activity (or variants of it) makes a good introduction to programming, as students have to think about formalizing clear step-by-step instructions for their partner to carry out. The full instructions and templates for our activity are included here.
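To make the analogy to programming concrete, here is a minimal sketch (not part of our class materials) of the pixel-primitive version of the activity: the "recipe" one student writes is a list of (row, column, color) instructions, and the partner "executes" it by filling cells of a blank grid.

```python
def run_recipe(instructions, rows, cols):
    """Execute a list of (row, col, color) fill instructions on a blank grid."""
    grid = [["." for _ in range(cols)] for _ in range(rows)]
    for r, c, color in instructions:
        grid[r][c] = color
    return grid

# Recipe for a 2x2 red square in the top-left corner of a 4x4 grid.
recipe = [(0, 0, "R"), (0, 1, "R"), (1, 0, "R"), (1, 1, "R")]
picture = run_recipe(recipe, rows=4, cols=4)
for row in picture:
    print("".join(row))
```

The follow-up discussion questions map directly onto program properties: how many instructions the recipe needed (efficiency) and how closely the result matched the target (accuracy).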


Computer behind Pixar

Inspired by the recent hype around Pixar* and particularly Boston Museum of Science's temporary Pixar exhibit, we called our class "Computer behind Pixar". The common goal of the exhibit and other educational media about Pixar is to hook in the general public with the beloved animations and characters for the purpose of introducing and motivating the underlying mathematical and scientific concepts. In fact, Mike from Monsters Inc. served as a recurring element throughout our activities, though we branched beyond Pixar, and beyond animation more generally.





* Reference links on the topic of math behind Pixar:



We described and showed a video about the rendering pipeline*, and drew attention to the importance of modeling at the core of this pipeline, as the initial step that all subsequent steps crucially depend on. We defined modeling as the construction of a mathematical representation composed of primitives.
The rest of our discussion centered around different representational choices and their properties.





* More rendering resources:
Video about rendering in Pixar
Article about rendering in "Inside Out"
Character rendering (dark knight)
Rendering pipeline summary




Tangible examples of 3D representations


3D printed models are a tangible
demonstration of discretization and
the resolution issue.

Voxel grids

We introduced the concept of discretization, necessary for the representation of shapes in digital computers: 2D shapes as pixels and 3D shapes as voxels. We reminded students of the ice-breaker activity where grid cells were used as primitives.   
We then discussed voxel grids as one form of representation for 3D objects, commonly used for 3D printing. We talked about the resolution issue: the trade-off between accuracy and efficiency. We passed around physical 3D printed models at various resolutions, similar to the models pictured on the right.
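The resolution trade-off can be sketched in a few lines of code (a toy illustration, not from the class): discretize a unit sphere onto an n x n x n grid of True/False cells, and note that doubling the resolution roughly multiplies the storage by 8.

```python
def voxelize_sphere(n):
    """Return an n x n x n grid where True marks voxels inside a unit sphere."""
    grid = []
    for i in range(n):
        plane = []
        for j in range(n):
            row = []
            for k in range(n):
                # Center of voxel (i, j, k) in [-1, 1]^3 coordinates.
                x, y, z = ((2 * a + 1) / n - 1 for a in (i, j, k))
                row.append(x * x + y * y + z * z <= 1.0)
            plane.append(row)
        grid.append(plane)
    return grid

for n in (8, 16):
    occupied = sum(v for plane in voxelize_sphere(n) for row in plane for v in row)
    print(f"{n}^3 grid: {occupied} of {n**3} voxels inside the sphere")
```

The coarse grid is cheap but blocky; the fine grid is smoother but stores 8x the cells - exactly the difference students could feel in the 3D printed models.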

Physical models to demonstrate the
differences between volumetric and
boundary representations. One is much
lighter! Why? It requires less material
to represent (and store).

Triangular meshes

In talking about efficiency, we introduced the notion of boundary representations, specifically meshes, for representing 3D objects without having to represent and explicitly store all the internal voxels (the volume). 
We connected the boundary representation to the ice-breaker activity, where in 2D, line segments were used to represent the target shape's boundary. We then showed students a demo of MeshLab, and passed around physical examples of volumetric and boundary representations.
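A rough sketch (an illustration of the idea, not the MeshLab demo) of why boundary representations are cheaper: a solid's voxel count grows with its volume, while its mesh grows only with its surface. A cube stored as a triangle mesh needs just 8 vertices and 12 triangles, regardless of resolution.

```python
# Vertices: the 8 corners of a unit cube.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Triangles: each face of the cube split into two triangles,
# stored as triples of indices into the vertex list.
triangles = [
    (0, 1, 3), (0, 3, 2),  # x = 0 face
    (4, 7, 5), (4, 6, 7),  # x = 1 face
    (0, 5, 1), (0, 4, 5),  # y = 0 face
    (2, 3, 7), (2, 7, 6),  # y = 1 face
    (0, 2, 6), (0, 6, 4),  # z = 0 face
    (1, 5, 7), (1, 7, 3),  # z = 1 face
]

print(len(vertices), "vertices,", len(triangles), "triangles")
# A 100x100x100 voxelization of the same cube would store 1,000,000 cells.
```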

CSG


We moved on to discuss how simple shapes can be combined with different operations to create more complex shapes, in 3D via constructive solid geometry (CSG). We reminded students that the ice-breaker activity also contained polygonal primitives in 2D. For 3D, we showed students a demo of OpenScad and discussed primitive operations (union, intersection, difference, ...) that can be performed on shapes. Applications in manufacturing were discussed. 
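A toy sketch of the CSG idea (not the OpenSCAD demo itself): represent each solid as a point-membership test, and build complex shapes by combining tests with the boolean operations discussed in class.

```python
def sphere(cx, cy, cz, r):
    """Membership test for a sphere of radius r centered at (cx, cy, cz)."""
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r ** 2

def box(x0, y0, z0, x1, y1, z1):
    """Membership test for an axis-aligned box."""
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def union(a, b):        return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersection(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b):   return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A box with a spherical bite taken out of one corner.
shape = difference(box(0, 0, 0, 2, 2, 2), sphere(2, 2, 2, 1))
print(shape(0.5, 0.5, 0.5))  # deep inside the box: True
print(shape(2, 2, 2))        # inside the removed sphere: False
```

Just as in OpenSCAD, the complex shape is never stored explicitly; it exists only as the recipe of primitives and operations.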

Height Fields

Height fields were introduced with the help of pin art boards, as pictured at the beginning of this article. Students played with the pin boards and considered again the concepts of discretization and the representation issue. We asked students: which kinds of shapes or surfaces can be represented this way, and which cannot?
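A height field is the digital analogue of the pin art board: one height value per (x, y) grid position. The sketch below (a toy example, not from the class) pushes the "pins" up into a dome; because each cell stores only a single height, shapes with overhangs cannot be captured - the answer to the question we posed to the students.

```python
import math

n = 8
# Push the "pins" up into a dome: height = sqrt(max(0, 1 - d^2 / R^2)).
heights = [
    [math.sqrt(max(0.0, 1.0 - ((i - n/2)**2 + (j - n/2)**2) / (n/2)**2))
     for j in range(n)]
    for i in range(n)
]

# Exactly one height per grid cell: a vertical line through (i, j) meets
# the surface at most once, which is why overhangs are unrepresentable.
print(f"peak height near the center: {heights[n//2][n//2]:.2f}")
```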

Procedural Modeling

The grass in Pixar's Brave was created with procedural modeling,
using parametric curves and randomness.
A great hands-on demo of this kind of modeling can be found on:
 Khan Academy's Pixar-in-a-Box.
We discussed how shapes could be created by specifying procedures on primitives (aside from the primitive operations in CSG). We showed demos of solids of revolution (what better way to motivate a concept that most students first encounter only in college calculus?). We discussed how procedures like revolution and extrusion can be performed along different paths to create all sorts of complex shapes. We discussed how these paths can be further parametrized so that the revolution or extrusion procedure changes along the path. We introduced randomness as another concept that can be used to add variability to the representation.
We discussed applications to modeling trees, forests, grassy fields, crowds, and cities.
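The revolution procedure can be sketched in a few lines (an illustrative toy, not the class demo): sweep a 2D profile curve around the z-axis to generate surface points, then add randomness to vary each copy, the way the grass in Brave varies from blade to blade.

```python
import math
import random

def revolve(profile, steps=16):
    """Rotate a list of (radius, height) profile points around the z-axis."""
    points = []
    for r, z in profile:
        for s in range(steps):
            angle = 2 * math.pi * s / steps
            points.append((r * math.cos(angle), r * math.sin(angle), z))
    return points

# A vase-like profile, revolved into a surface...
profile = [(0.5, 0.0), (0.8, 0.5), (0.4, 1.0), (0.6, 1.5)]
vase = revolve(profile)
print(len(vase), "surface points from", len(profile), "profile points")

# ...then a "field" of slightly randomized copies.
random.seed(0)
field = [revolve([(r * random.uniform(0.8, 1.2), z) for r, z in profile])
         for _ in range(5)]
print(len(field), "varied copies")
```

The representation stores only the profile, the procedure, and the random parameters - far less than the millions of points it can generate.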

3D Representation    Primitives            Operations (recipe)
Voxel grid           Voxels                Material specification for each voxel
Triangle mesh        Triangles             List of triangles with vertex locations
CSG                  Basic shapes          CSG operations (union, intersection, etc.)
Height field         Points with heights   Assignment of a height to each point
Procedural model     Basic shapes          Procedure (e.g. extrusion along a path)


A new way to look at things

With our class, we hoped to give students a look at the modeling decisions that underlie all the animated films, video games, and special effects they see on a daily basis. We wrapped up our class with a thought exercise, putting students in the position of making decisions about how to model different objects. We told them to think about the different representations we discussed: the primitives and operations required. We told them to consider the trade-off between accuracy and efficiency. Given a representation, we also told them to think about its usability: what kinds of use cases are being considered, e.g. whether the modeled object needs to be animated and how.

Students were asked to brainstorm how they would model the following objects: buildings, cities, fabric, hair, grass, water. Along the way, we showed them image and video demos (all these links can be found in the slides). We passed around more physical models. Together, we watched a "behind the special effects" video that showcased the kinds of 3D models used in movies, a great visual review of the many representations covered in our class.

We told students to look around and realize that 3D modeling decisions underlie many other applications: special effects in films, video games, simulations, anthropological reconstructions, product design, urban planning, robotics, and 3D printing. To be reminded that they have been armed with a new way to look at things, students took home polygonal stickers.



Monday, 24 August 2015

Hyperconnectedness leads to hyperactivity

Although the term "hyperconnected" already exists, I will use the following user-centric definition: the embedding of an individual within the internet - the individual's omnipresence on the web. People have all sorts of outlets for posting, storing, and sharing all sorts of content: for example, you can post your photos on Facebook, Google Photos, Instagram, Flickr, Snapchat, etc.; you can blog on Blogger, Wordpress, Tumblr, etc.; you can write about articles, news, your day and your thoughts on Twitter, Facebook, Google+, etc.; you can exchange information on Quora, Reddit, etc. and links on Pinterest and Delicious; you can share your professional information on LinkedIn, your video creations on YouTube, Vimeo, and Vine. You get the point. Although there is some redundancy across these internet services, they also have enough of their own features that they can be tailored for particular use cases (not to mention slightly different communities and audiences). I've personally found that there are enough differentiating features (at this point at least) to warrant separate posts on separate sites. And what does this all lead to? Hyperactivity, I claim.

source: http://www.coca-colacompany.com/stories/5-tools-for-staying-tech-savvy-in-a-hyper-connected-world

With so many ideas, thoughts, suggestions, and opinions swirling around, a whole world of possibilities opens up to the individual - from digesting all the content that is posted, to posting one's own content. The posts of others inspire one to create and do, and the social interconnectedness - the awareness that your content will be widely seen - drives one to post as well. This self-reinforcing cycle is the perfect breeding ground for creativity and content-creation. We live not just in the information age - we live in the creativity age*. Yes, people have always created before, but now that creations are visible to the whole world, they can stand on the shoulders of creative giants. Ideas are exchanged and evolve at the speed of fiber optics. People hyperactively create.

* side note: because creativity correlates with content-creation here, we're generating significantly more data than ever before; stay tuned for a very intelligent (and creative) Internet of Things!

At this point, the discussion portion of this blog post ends, and I share my excitement for some of the awesomeness on the creative web below. These are the reasons why there are never enough hours in the day, or years in a lifetime. I am constantly inspired by how many different things people master and how creative they can be in all manner of things and activities. The rest of this post can be summarized as #peopleareawesome.

The activities I list below may at first glance seem like a random sampling, but they're a set of activities that are united by the ability to do them without being a total expert (with some practice you can already achieve something!) and the ability to do them on the side (as a hobby, for short periods of time, with limited equipment).

Electronics, robotics, RC

My 15-year-old brother has learned to put together electronics and build RC vehicles, planes, boats, and drones by watching lots of YouTube videos. This is the type of knowledge that no traditional education can deliver at such a density and speed. This creative maker culture is largely driven by being part of a large community of like-minded individuals (no matter what age) that positively reinforce each other via posts, discussions, and likes. An individual not connected to the internet might have a very small community (if any) with much sparser positive reinforcement, which I claim would result in fewer amazing creations.



New art styles

Art is a hobby of mine, and I'm big on constantly trying new things. There's always something new that the web coughs up in this regard, beyond the traditional styles of sketching and painting. For instance, consider these widely diverging artistic styles:

check out the art of wood burning

and the art of painting on birch bark

and painting by wet felting

and check out this crazy video of candle carving


Scrapbooking and crafting

personal memories and trips can be creatively captured in scrapbooks

Culinary masterpieces

Judging by the popularity of cooking channels, and food-, cooking-, and baking-related tags and posts on different social networks, people love to share the dishes, recipes, and culinary masterpieces that they create. I mean, just look at this:


and themed foods for any occasion: 

Travel blogs and photography

I'm also hugely inspired by all the travel blogs people put together. Not only do they find the time to visit amazing places and capture them from all sorts of beautiful angles, they also blog about it: http://fathomaway.com/slideshow/fathom-2015-best-travel-blogs-and-websites1/

The really creative also put together annotated, narrated, and musical slideshows and videos.

I'm not even going to go into all the amazing photography people do. I will leave you with this:



Data visualization

Both an art and a science, how to visually depict data is very relevant in this day and age. I'm inspired by creativity, once again:


Creative writing

Other than blog writing, I like the idea of creating writing on-the-side to de-stress and get some brain juices flowing - here's some things worth trying and checking out (and possibly submitting to if you're extra adventurous): short SF stories, poetry, funny captions.

Another form of "creative writing" is putting together tutorials, explanations, etc. on all sorts of topics that interest you. It allows you to organize your thoughts and attempt to explain some content with a specific audience in mind. I love to write, explain, and write explanations, but if only there was more time in a day...

How it all ties together

People inspire others by taking photos of their creations and posting them on photo-sharing sites, they create videos of the how-to process to motivate others to try, and they bookmark ideas/links they like. They then blog or tweet or chirp about their process and final products and otherwise share their creations with their social networks and the world. The resulting online interactions (sharing of ideas, discussions, comments, and likes) spark the next cycle of creativity, and on it goes. (I posted some of the pictures above with the intention of inspiring others to try some new things as well.)

In short, there is no shortage of activities to occupy oneself with if there is some time on the side. Of all the activities and links listed above, I've tried about 70%. I am definitely hyperactive when it comes to creating, and the internet age is fueling a lot of that for me by constantly feeding me new ideas. I believe that when you try new things, you expand your brain (perhaps via the number of new connections/associations you make), which benefits you in many more ways than you might first think. I believe that engaging in all manner of creative activities has long-lasting positive effects on intellectual capability and psychological well-being, and that instead of plopping down statically to watch something, creating something keeps your brain "better-exercised", so to speak.

Tuesday, 14 July 2015

The Experiencers: this is your last job

With the rapid growth of what A.I. is capable of, the rapid advancement of technology (via Kurzweil's Law of Accelerating Returns), and the massive reach of the internet and the cloud, the obvious question is: what is the role of humans to be when even the intellect can be mechanized? I offer my musings on a potential kind of future here: http://web.mit.edu/zoya/www/TheExperiencers_SF.pdf

Thursday, 2 July 2015

where is innovation, and who's pulling whom along for the ride?

In the modern landscape of giants like Google and Facebook, and the scurry of activity generated by tech start-ups in the SF and Boston areas and beyond, one of the big questions is: where does academia sit? And how do all these forces shape each other?

Big companies are no longer shaping just the industry world - they are having massive impacts on academia - both directly (by acquiring the best and brightest academics) and indirectly (by influencing what kinds of research directions get funded).



This leaves a few hard questions for academics to think about:
To what extent should industry drive academia and to what extent can academia affect where industry is going?

We can follow, for instance, the big companies - sit closely on their heels, learn about their latest innovations, and project where they're likely to be 5-10 years from now. Then use this knowledge to appropriately tailor funding proposals, to direct research initiatives, and to count on the emerging technologies to fall into place. For instance, if you know that certain sensors are going to be in development in the next few years, does it not make sense to already have ready the applications for those sensors, the algorithms, the methods for processing the data? Or does this build up an inappropriate dependence and turn academics into consumers? Taking this approach, you're likely to win financially in the long run - either via funding (because your proposed projects are tangible) or via having your projects, your ideas, or you yourself acquired by the big guys (with all the advantages that go along with that). However, does this approach squelch innovation - the thinking outside the box, outside the tangible, and further into the future?

Importantly, where is most innovation coming from these days? In one of the Google I/O talks this year, there was a projection that in the near future more than 50% of solutions will come from startups less than 3 years old. Why is this the case? I can think of a number of reasons. First, bright young graduates of universities like MIT and Stanford are taking their most innovative research ideas and turning them into companies, and this is becoming an increasingly hot trend. More and more of my friends are getting into the start-up sphere, and those that aren't are at least well aware of it. Second, many startups are discovering niches for new technologies: whether it's tuning computer vision algorithms to the accuracy required for certain medical applications, applying sensors to developing-world problems like sanitation monitoring, or using data mining for applications where it has not been used before. Tuning an application, an algorithm, or an approach to a particular niche requires the utmost innovation - that is where you discover that you need a computer vision algorithm to achieve an accuracy that was never achieved before, to create a sensor with a lifespan that was not previously imaginable, to make things work fast, make them work on mobile, make them work over unstable network connections, make the batteries last. Academically, you rarely think about all of the required optimizations and corner cases, as long as the proof-of-concept exists (does it ever really?), but in these cases, you have to.

Perhaps we can think of it this way: the big guys are developing the technologies that the others do not have the resources for; the small guys are applying the technologies to different niches; and the academics are scratching their heads over application areas for these technologies and the next-to-emerge technologies - never quite rooted in the "what we have now" and always stuck (or rather, comfortably seated) in the "what if". Who's shaping who? It looks like they're all pulling each other along, sometimes gradually, other times in abrupt jerks. At any given time you might be doing the pulling or be dragged along for the ride.

So where does that leave us? Are big companies, little companies, and academia taking distinctly different routes, or stepping on each other's toes? At this point, I think there is a kind of melting pot without sharp boundaries - a research project slowly transitions into a start-up, which then comes under the ownership of a big company; or a research lab transplants its headquarters into a big company directly; or internal organizations like research labs or advancement labs (Google Research, GoogleX, ATAP) have the feel of start-ups with the security and backing of a large company. It's a unique time, with everything so malleable. But I'm not sure this triangle-of-a-relationship has reached any sort of equilibrium quite yet... We will have to wait until the motions stabilize to see where the companies and the universities stand, and whether they will continue to compete in the same divisions, or end up in vastly different leagues.







Wednesday, 24 June 2015

Imagining your imagination

Given the news that is making such a splash recently - "dreaming A.I." and "machines with imagination" (http://googleresearch.blogspot.fr/2015/06/inceptionism-going-deeper-into-neural.html) - a few interesting questions are up for pondering...

An NN's (neural network's) "imagination" is a property of the data it has seen and the task it has been trained to do. So an NN trained to recognize buildings will hallucinate buildings in novel images it is given, an NN trained on YouTube videos will discover cats where no cats have ever been, etc. So, an NN trained on my experience, one that sees what I see every day (and provided it has the machinery to make similar generalizations), should be able to imagine what I would imagine, right?

Facebook and Google and other social services should be jumping on this right now to offer you an app to upload all your photo streams and produce for you "figments of your imagined imagination" or "what your photos reveal about what might be in your mind" (the high-tech NN version of personality quizzes, perhaps). Basically, you can expect the output to be a bizarre juxtaposition of faces and objects and shapes (like in the news article) but customized just for you! Wait for it, I'm sure it's just around the corner.

So if we strap on our GoPros or our Google Glasses and run out into the world hungrily collecting every moment, every sight, and every experience that we live through, can we then hope that our very own personal A.I.s will be able to learn from all this data to remember our dreams when we can't, guess a word off the tip of our tongue, make the same connections, parallels, and metaphors, and know what new thought our mind could have jumped to from the context of the previous conversation? As we envision that A.I. will one day augment us, do we take into account the fact that the augmentation will not be a simple division of labor? "I as the human being will leave the superior, heuristic, and creative tasks to myself, and leave my duller mechanical half to deal with all the storage and lookup and speed that I lack" may be an outdated thought; perhaps "your" A.I. will be able to make bigger generalizations, leap further, find more distant connections, innovate, and create. The correct question should then be: what can YOU contribute to your A.I.?

Thursday, 18 June 2015

CVPR recap and where we're going

The Computer Vision and Pattern Recognition (CVPR) conference was last week in Boston. For the sake of the computer vision folk (at least in my group), I created a summary/highlights document of some paper selections here: http://web.mit.edu/zoya/www/CVPR2015brief.pdf

It takes an hour just to read all the titles of all the sessions - over 120 posters per session, 2 sessions a day, 3 days... and workshops. This field is MONSTROUS in terms of output (and this is only the roughly 20% of papers that actually make it to the main conference). Thus, a selection of papers, instead of all of them, becomes at least a tiny bit more manageable.

The selections I made are roughly grouped by topic area, although many papers fit in more than one topic, and some might not be optimally grouped - but hey, this is how my brain sees it.

The selection includes posters I went to see, so I can vouch that they are at least vaguely interesting. For some of them I also include a few point-form notes, which are likely to help with navigation even more.

Here's my summary of the whole conference:

I saw a few main lines of work throughout this conference: CNNs applied to computer vision problem X, metric for evaluating CNNs applied to computer vision problem X, new dataset for problem X (many times larger than previous, to allow for application of CNNs to problem X), new way of labeling the data for the new dataset for CNNs.

In summary, CNNs are here to stay. At this conference I think everyone realized how many people are actually working on CNNs... there have been arxiv entries popping up all over, but once you actually find yourself in a room full of CNN-related posters, it really hits you. I think many people also realized how many other groups are working on the exact same problems, thinking about the exact same issues, and planning on the exact same approaches and datasets. It's become quite crowded.

So this year was the CNN hammer applied to just about any vision problem you can think of - setting new baselines and benchmarks left and right. You're working on an old/new problem? Have you tried CNNs? No? The crowd moves on to the next poster that has. Many papers have "deep" or "nets" somewhere in the title, with a cute way of naming models applied to some standard problem (ShapeNets, DeepShape, DeepID, DevNet, DeepContour, DeepEdge, segDeep, ActivityNet). See a pattern? Are these people using vastly different approaches to solve similar problems? Who knows.

So what is the field going to do next year? Solve the same problem with the next hottest architecture? R-CNNs? even deeper? Some new networks with memory and attention modules? More importantly, do results get outdated the moment the papers are submitted because the next best architecture has already been released somewhere on arxiv, waiting for new benchmarking efforts? How do we track whether the numbers we are seeing reported are the latest numbers there are? Are papers really the best format to present this information and communicate progress?

These new trends in computer vision leave us with a lot of very hard questions to think about. It's becoming increasingly hard to predict where the field is going in a year, let alone a few years from now.

I think there are two emerging trends right now: more industry influence (all the big names seem to be moving to Google and Facebook), and more neuroscience influence (can the networks tell us more about the brain, and what can we learn about the brain to build better networks?). These two forces are beginning to increasingly shape the field. Thus, closely watching what these two forces have at their disposal might offer glimpses into where we might be going with all of this...





Wednesday, 17 June 2015

The Computer History Museum in SF

The Computer History Museum in SF was great! It was a bit of a random stumble during a trip along the West Coast a few weeks ago, but it left a memorable trace! The collection of artifacts is quite amazing: name just about any time in computer history (ancient history included) and any famous computer (Babbage Engine, ENIAC, Enigma, UNIVAC, Cray, etc.) and some part of it is very likely at this museum. We had assumed the museum would be a 2-hour stopover on the way to other SF sights, but ended up staying until closing, without even having covered all of it.



As a teaser I include a few random bits of the museum that caught my attention (I may have been too engrossed in the rest of the museum to remember taking pictures).

One of the oldest "computers": the Antikythera mechanism - I had never heard of it before! The Ancient Greeks continue to impress! It shows another timeless quality of humanity: our technological innovations are consistently driven by our need for entertainment (in the case of the Ancient Greeks, such innovations can be linked back to scheduling the Olympic Games). At this museum, there was a full gallery devoted to old calculators and various mechanical computing implements from different cultures.


A fully-working constructed version of Babbage's Difference Engine - completed in 2008 according to Babbage's original designs (which apparently worked like a charm without any modification!). Museum workers crank this mechanical beast up a few times a day for the marvel of the crowd. Once set, this machine can compute logarithms, print them on a rolling receipt, and simultaneously stamp an imprint of the same values into a mold (for later reprinting!). Babbage also thought of what happens when the imprinting fills up the whole mold - a mechanism halts the whole process so that the tablet can be replaced! That's some advanced UI, developed without any debugger or user studies.

Based on the over-representation of this Babbage Engine in this post, you can tell that quite a bit of time was spent gawking at it:



By the way, here's a real (previously functional) component from the UNIVAC. Unlike that panel with lights and switches at the top of this post - apparently, that did not do anything. It was purely for marketing purposes, for whenever the then-investors came around to check out this "machine": it's much more believable that something real is happening when you have a blinking dashboard of some kind and large buttons that serve no purpose but look "very computational". Looks like this continues to be a powerful marketing strategy to this day :)


Just a fun fact (no, this is not the origin of the word "bug", which is what I thought this was at first, but does demonstrate some successful debugging):


The following describes quite a few computer scientists I know:


There was a gallery devoted to Supercomputers and another gallery devoted to computer graphics. Look at what I found there - every Graphics PhD student's rite of passage (by the way, the Intrinsic Images dataset is sitting in my office, no glass case, but we will soon start charging chocolate to see it):


There was also a whole gallery devoted to robots and A.I. (an impressive collection), a gallery devoted to computer games, and a gallery devoted to the Apple computer just to name a few.


By the way, something I didn't know about the Apple computer - here is some awesome bit of marketing that came out in 1984:


There was a gallery devoted to the Google self-driving car. I like how this is in the Computer History museum, because really, you can't put any computer technology in a museum and assume it will remain current for very long. The drone in the corner of that room had a caption that mentioned something about possible future deliveries. Old news. I've seen bigger drones :) 


That's about the extent of the photos I took, because photos really fail to convey the environment that a museum surrounds you with. It is a museum I would gladly recommend!

As an after-thought, it's interesting to visit a "history" museum where you recognize many of the artifacts. It gives you a sense of the timescale of technological innovation, which continues to redefine what "history", "progression", and "timescale" really mean... notions that we have to regularly recalibrate to.