My Tech Projects

Last updated: Man-O-War Cay, 13 Sep 2012
  Current downloads  /  Notes
I'd like to play around with some microcontrollers.  These are small, portable, low-power processors dedicated to a particular task.  Here is a short list of ones that look interesting:
  1. Arduino
       Pros:  Inexpensive, rich ecosystem and community,
    (likely) low power consumption, looks like fun.
       Cons: Relatively slow (16 MHz)
  2. Gumstix
       Pros:  Powerful processor, lots of memory, compact
    size, runs Linux.
       Cons: Power hungry?  TBD.
  3. Raspberry Pi and BeagleBone
    Man-O-War 15 Sep 2012  (I ordered one of each)
       Pros:  Apparently good speed (RasPi:1GHz, Bone:
    720MHz ARM), strong following, Linux & Android.
       Cons: Don't know yet
       Notes: Here are my RasPi notes
  In my hand are:
      a Gumstix Fire,
      an Arduino Nano, and
      an Arduino Duemilanove
  In the background is:
      a Tobi expansion board
      for the Gumstix
   along with an iPod Touch and Apple's Accessory API*
 
I'm also interested in the various ways to organize, index, and generally analyze my website (well, websites in general).  I'd like to play around with semantic web constructs.  E.g. how are my photos of fishing boats in Peru related to photos of my dinghy and sail placement and graphs of sailboat characteristics?  I'd like to describe each of those with semantic markup linked to (public and private) ontologies, then graphically link and navigate (pun there?) between them.  Something for the future.  In the meantime, below is an HTML grapher that I found and am starting to fiddle with.
 
The projects:
  1. Hello World Remote Controller
  2. Camera Controller
  3. Stereo Webcam
  4. Boat Monitors
  5. Simple Logic Analyzer
  6. System Load Monitor displayed on my iPod
  7. HTML Grapher
  8. Monitor the battery on my Macbook
  9. An interactive sailboat characteristics plotter (like these plots but well.. interactive)
  10. Port of Traer's Physics library to Processing.js
  11. Dromaeo (a Javascript test tool) Frontend
  12. Man-O-War Heritage Museum Website
  13. Mac Notes
  14. My Raspberry Pi Projects

 
 Hello World Remote Controller - Notes
Man-O-War, 25 Oct 2011 Download
The notion is to control the Arduino LED using an iPod Touch or iPhone via WiFi.
 
Here is the UI.  The Start/Stop button starts and stops the blinking of the LED on the Arduino board (the LED connected to pin 13).  The two radio buttons, "Blinky" and "Morse Code", control whether the LED blinks in a simple ½-second on/off pattern or the Morse Code for "HELLO WORLD".
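(If you're curious what the Morse mode amounts to, here is a small illustrative sketch - not the simulator's actual code - of the kind of on/off schedule it could follow, using the usual 1-unit dot / 3-unit dash timing:)
    // Illustrative only: build an on/off schedule (in ms) for blinking "HELLO WORLD" in Morse code.
    // Standard timing: dot = 1 unit, dash = 3 units, gap within a letter = 1, between letters = 3, between words = 7.
    var MORSE = { H: "....", E: ".", L: ".-..", O: "---", W: ".--", R: ".-.", D: "-.." };
    var UNIT  = 250;   // ms per unit

    function morseSchedule( text ) {
        var schedule = [];                                 // alternating on, off, on, off ... durations
        text.toUpperCase().split("").forEach(function (ch) {
            if (ch === " ") {                              // word gap: stretch the previous "off" to 7 units
                schedule[schedule.length - 1] += 4 * UNIT; // (3 units were already added after the letter)
                return;
            }
            MORSE[ch].split("").forEach(function (sym) {
                schedule.push( sym === "." ? UNIT : 3 * UNIT );  // LED on
                schedule.push( UNIT );                           // gap within the letter
            });
            schedule[schedule.length - 1] = 3 * UNIT;            // gap between letters
        });
        return schedule;   // feed this to a setTimeout loop that toggles the (simulated) LED
    }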
Click on the buttons to control this simulated Arduino.
 
 
And here is the Arduino it is controlling:
If you have an iPod Touch or iPhone, to see the actual look:
  • Go to the Safari browser on your iPod or iPhone.
  • Enter "svbreakaway.info/tp/myipod.html" on the address bar and click on "Go" (or simply click on that link if you are viewing this page from your iPod/iPhone.. in which case, first note the next step about adding it to your Home Screen).
  • When the Hello World Remote Controller page is displayed, click on the share button (the box with the arrow pointing out from it) in the middle of Safari's option bar, then Add to Home Screen and finally Add.  The page lays out properly when you run it from your Home Screen - ie. without Safari displaying its address bar.
    You should see that the button has been added to one of your Home Screen pages.
  • Click on it.  The screen you see on your iPod/iPhone should look similar to the one shown above (the Hello World Remote Controller page) - with the addition of a Link button at the bottom.  With it, you can create a link from your real iPod/iPhone to a simulated Arduino on this page.  Click on that button to open one; in the upper left corner of the window that opens up you should see a Serial No.  Enter that number in the box next to the Link button on your iPod/iPhone.  Then click on the Link button.  You should then be able to control your simulated Arduino from your iPod/iPhone.  Here is a monitor of all the current connections.  When you are connected, you should see your IP address in the table.

If you have an Arduino and either the Asynclabs WiShield 2.0 or Sparkfun WiFly shield (and an iPod Touch, iPhone or any browser that can make a WiFi connection to an ad-hoc device), please download the code and try it out.  Let me know if you have any problems with it.
 
Some notes regarding the code:
  • A key consideration here is to keep the number of bytes transferred between the iPod/iPhone and Arduino as small as possible.  The WiShield 2.0's packet size is around 130 bytes and we want to pass as few packets as possible.  The app's client/server communication is fairly efficient and straightforward.  When the user clicks on the Start/Stop button or one of the radio buttons, the onClick event calls my push() routine:
        <button type=button id=startButton ... 
            onClick="push('/tp/hwrc/controller','stbu','start')">Start</button>
    
    push() generates a request to the server consisting of just the URL:
        xmlhttp.open( "GET",url,true );
        xmlhttp.send( null );
    
    The URL includes the name of the button and its value.  E.g.
        /tp/hwrc/controller?stbu=Start
    
    So, it's a small number of bytes passed.  The server parses the URL, changes the state of the blinking settings and returns a small XML response, e.g.
        <state>started</state>
    
    - a fairly small number of bytes, but with XML formatting so it is easy to parse in the browser:
        var state = xmlDoc.getElementsByTagName("state")[0].childNodes[0].nodeValue;
    
    For the Start/Stop button the browser code updates the state of the button, thus confirming that the command was received and executed.  This confirmation step for the radio buttons is something that should be added to the code.  (A condensed sketch of this browser-side exchange is included after these notes.)
  • The Arduino code that controls the LED uses a timer.  The Arduino function, delay(), would block the webserver from responding - not a good thing.  Using a timer instead was easy and seems to work well.
  • My version of the WiShield webserver code (in the /libraries/WiShield/ folder) has been considerably hacked (by me) for tracing and to instrument the load on the server.  I'll try to clean up the code and document that instrumentation interface at some point.
  • I found setting up the ad-hoc mode in the WiFly shield to be a little quirky.  That setup code starts around line 324 in the sketch if you want to play around with it.
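For reference, here is a condensed sketch of the browser side of that exchange.  It is not a copy of the page's source - just the push()/XML pattern from the notes above, with only the Start/Stop button handled:
    // Condensed sketch of the browser-side push() pattern described above.
    // The URL and parameter follow the /tp/hwrc/controller?stbu=Start example.
    function push( path, name, value ) {
        var xmlhttp = new XMLHttpRequest();
        var url = path + "?" + name + "=" + value;      // e.g. /tp/hwrc/controller?stbu=Start

        xmlhttp.onreadystatechange = function () {
            if ( xmlhttp.readyState == 4 && xmlhttp.status == 200 ) {
                // The Arduino returns a tiny XML response, e.g. <state>started</state>
                var xmlDoc = xmlhttp.responseXML;
                var state = xmlDoc.getElementsByTagName("state")[0].childNodes[0].nodeValue;
                // Update the Start/Stop button text to confirm the command was executed
                document.getElementById("startButton").innerHTML = (state == "started") ? "Stop" : "Start";
            }
        };
        xmlhttp.open( "GET", url, true );
        xmlhttp.send( null );
    }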
 
 
 
 Camera Controller
Last updated: Melbourne, 9 Feb 2010 Demo (you may need to use your browser's 
  zoom to fit - e.g. View > Zoom > Reset)
Notion:
 
(a) An intervalometer is a device that helps you make a series of exposures with your camera.  For example, you could use it to take 1000 photos, each 15 minutes apart. Then combine them in post processing to make a time-lapse video.  Canon doesn't build an intervalometer feature into their EOS cameras.  I'd like a nice snazzy device that does this.
(b) Also, I'd like to be able to play around with triggering the shutter on some special event - say a bird being detected in front of the lens.
(c) Finally, I'd like to have remote control over the camera, with the ability to control all of the camera's functions as well as to preview the image and set the focus before the picture is taken.
 
Interesting problems:
  I think what would make this an interesting project is integrating the UI for all these functions on a snazzy device like the iPod.

Status:
  Here is a mock-up of the current GUI.  Note: you might need to zoom in or out a little in your browser for the demo pages to fit right.  (e.g. "<command> +" and "<command> -" on Firefox on a Mac, "<CTRL> +" and "<CTRL> -" on Windows, etc).  The buttons along the bottom of these mocked-up pages (as well as the "8.63 volts" drill-down button and the "?" button) are hooked up so please try them.  The other buttons would need to invoke functions on the Arduino so they don't work here.
 
That mock-up should look OK with any web browser, BUT if you have an iPod Touch or iPhone, to see the actual look:
  • Go to the Safari browser on your iPod or iPhone.
  • Enter "svbreakaway.info/tp/cc/r.html" on the address bar and click on "Go" (or simply click on that link if you are viewing this page from your iPod/iPhone.. in which case, first note the next step about adding it to your Home Screen).
  • When the "Remote Control" page is displayed, click on the "box with the arrow pointing out from it" in the middle of Safari's option bar (I guess that icon means "options"), then "Add to Home Screen" and finally on "Add".  When you run it from your Home Screen, the pages lay out properly - ie. without Safari displaying its address bar.
  • You should see the button has been added to one of your Home Screen pages.  Click on it and you should have a fairly realistic demo.  Once again, the buttons along the bottom, the "8.63 volts" drill-down button on the "Admin" page, and the "?" button are hooked up and should work.  If you are on the "Admin" page and click on "Adm" again, it refreshes the page, causing the mock data to be refetched from the GoDaddy server (where svbreakaway.info is being hosted).  The data won't change, but the "Response time" on the "Admin" page is what your browser actually measures (to fetch the data from the server).  I'm seeing between 150 and 250 ms to the GoDaddy server from my iPod here in FL and around 120 ms to my Arduino.
Melbourne, 6 Mar 2010 
Video of my first version:
 
[25 Dec 2011: I'd like to port this code to the new Arduino WiFi shield when it becomes available.]
 
 
 
 Stereo Webcam - Notes
Man-O-War, 16 Dec 2010 Download
Notion:
 
(a) The notion is a typical webcam (with video), but where a user, via the web, is able to control where the camera is pointing.  The camera should also be portable so that I can move it around.
(b) Stereoptic vision?
(c) 360° field of view?
 
Interesting problems:
 
(a) OK, so one option is to simply interface this Logitech webcam to a microcontroller via a USB connection.  If the microcontroller has a WiFi interface, then it should be (relatively) easy to forward the resultant video stream to a host webserver nearby.  And then stream that to the end-user.  Likewise, the end-user controls where the webcam is pointing by sending commands in the other direction.  Sounds straightforward, and I'll probably implement this first.
(b) 3D televisions are apparently "the next big thing" in consumer electronics.  An interesting variation of this project would be a stereoptic webcam.  How about using 2 Logitech webcams for stereoptic vision, coordinated by the microcontroller, to make a 3D webcam.
(c) One of the things I'd like to have is a 360° field of view. 
  • I guess one possibility would be building a "sphere" of webcams so that their images overlap.  Software in the microcontroller would stitch the images together and present a consolidated view to the end-user .. either with perspective "flattened out" or some sort of fish-eye or 360° view.
  • Another possibility might be a 360° panoramic lens, such as (a cheap eBay copy of) one of these from 0-360, EGG, or EyeSee360.  Or this from Olympus if they do come out with it.
  • A third possibility might be making the camera mobile, similar to the Logitech webcam but with a wider range of movement.  A simple webcam could be mounted on an arm that rotates on 2 axes - each axis controlled by a servo.  Or maybe mount the camera on a car in a bubble, with the wheels able to go forward-and-backward as well as side-to-side.
(a-c) (I assume) one of the challenges will be preventing the camera from "looking" directly at the sun.

Possible solution:
 
WiFi between the cam unit and Public Webserver is important.  And since I'd like to try Notions (b) or (c) mentioned in the section above, I want as powerful a microcontroller as possible.  Say, the Gumstix Overo Fire.
 
    16 Dec 2010
As a first pass to learn about the USB interfaces to these Logitech Orbit AF webcams and see how the stereo video is going to come out, I wrote an Objective-C program that runs on the MacBook which controls one or two webcams.  The webcam(s) are plugged directly into the USB port(s) of the MacBook (ie. no microcontroller).  If you have one or two Logitech Orbit AF webcams and OS-X (I developed it on 10.5 and then upgraded to 10.6 when it became available, so it should work on either), you can download my code here.  I'm still working on the demo video and some documentation.
 
 
 
 Everything but the kitchen sink
Last updated: Man-O-War, 2 Oct 2012
For my boat monitoring projects this winter, I'd like to interface a bunch of sensors to my microcontrollers.  This blog entry will describe my results from interfacing them to a Raspberry Pi and/or BeagleBone - just to shake things out and maybe try to organize a plug&play approach.  I suspect the RasPi and BeagleBone will be my preferred microcontrollers (rather than the Arduinos that I have used in the past).  The Arduinos are just too limited, IMO.
 
For the final installation, I'd pick an appropriate microcontroller, PCB breadboard, container, power supply, etc.  Plus, use solder connections instead of the plug-in breadboard.

The stew as of Feb 2012 (not everything is hooked up).  Top: the GPS module (under the USB plug), the Arduino ADK board and WiFly Shield, the 5V-3.3V level converters.  Bottom: the various sensors, a Bluetooth module, and the SD card readers.
 
And seeing this installation art makes me want to add some kind of interactive light - maybe to the underside of the hardtop bimini.  Something inspired by bioluminescent plankton that I've seen in the wake.  Or a reflection like this but "reflected off" somebody in the cockpit.  Hmm.
 
Here is a list of the components that I either have or have on order:
For each component below: what it is, its function, and where I'd like to use it.

Accelerometer
    Triple Axis Accelerometer Breakout - MMA8452Q
    3-Axis Accelerometer Module
  Function: Measure acceleration (G's) in 3 dimensions
  Use: Dinghy Black Box

Barometric pressure
    BMP085 Barometric Pressure/Temp/Altitude Sensor
  Function: Measure barometric pressure
  Use: Weather Monitor

Camera
    Weatherproof TTL Serial JPEG Camera with NTSC Video and IR LEDs
  Function: Takes photos and video
  Use: Boat Security Monitor

Compass
    Parallax Compass Module HMC5883L
  Function: Measure compass heading
  Use: Dinghy Black Box

GPS
    Parallax GPS Module PMB-648 SiRF
    Adafruit Ultimate GPS Breakout - 66 channel w/10 Hz updates - Version 3
  Function: Lat/long
  Use: Dinghy Black Box

Gyro
    Parallax Gyroscope Module L3G4200D
  Function: Measure orientation in 3 dimensions
  Use: Dinghy Black Box

Humidity
    Humidity Sensor - HIH-4030 Breakout
    AM2302 (wired DHT22) temp-humidity sensor
  Function: Measure humidity
  Use: Weather Monitor

Logic Level Converters
  Function: Convert 5V signal to/from 3.3V
  Use: microSD, SD-MMC cards

Microcontrollers
    1. Arduino ADK, Duemilanove, Mega, Nano USB v3
    2. Raspberry Pi
    3. BeagleBone
  Function: Microcontroller
  Use: Well, to prototype this and then the individual uses

Microphone
    Electret Microphones (4) + amp & board
    Breakout Board for Electret Microphone (1)
  Function: Listen to the sound of the engine
  Use: Boat Engine Monitor

Motion
    PIR (Passive Infra-Red) Sensor
  Function: Detects motion
  Use: Boat Security Monitor

Potentiometer
    Continuous Rotation Potentiometer
  Function: Create a resistance depending on orientation
  Use: Dinghy Black Box - for a DIY wind direction instrument

Proximity
    Hall Effect sensors and some Magnets
  Function: The Hall Effect sensor detects presence of the magnet
  Use: Boat Security Monitor - for an intrusion alarm; Dinghy Black Box - for a DIY anemometer; Weather Monitor - for a DIY anemometer and rain gauge

RS232
    RS232 Shifter SMD
    Droids SAS Serial Adapter RS232-TTL
  Function: Interface to RS232 cable
  Use: Boat Performance Monitor and Weather Monitor - interface to the Raymarine SeaTalk multiplexor

SD
    microSD Transflash
    SD-MMC Card
  Function: Read/write to SD card
  Use: Dinghy Black Box, and anywhere I want to do persistent logging

Support
    1. ChronoDot - Ultra-precise Real Time Clock - v2.1
    2. Large waterproof OtterBox - 3000
    3. Solar Lithium Ion/Polymer charger - v1.0, Large 6V 3.4W Solar panel - 3.4 Watt, Lithium Ion Battery Pack - 3.7V 6600mAh
    4. IR sensors - TSOP38238 and Mini Remote Control
  Function: 1. The RasPi doesn't include an RTC; 2. They claim waterproof to 100'; 3. These should all fit in (or be mounted on) the large OtterBox; 4. To enable remote control (other than an iPod or Android device, which I plan to do also)
  Use: 1. Any uses that need an accurate timestamp; 2. Any uses that need to be weatherproof or 3. self-powered

Thermometer
    TMP36 - Temperature Sensor
    One Wire Thermometer - DS18B20
  Function: Measure temperature
  Use: Boat Fridge Monitor and Weather Monitor

Wind Speed and Direction
    1. My old Datamarine mast-top instrument
    2. My new Raymarine mast-top instrument
  Function: Measure wind speed and direction
  Use: 1. Dinghy Black Box; 2. Boat Performance Monitor and Weather Monitor

Wireless
    1. Asynclabs WiShield 2.0, Sparkfun WiFly Shield
    2. Miniature WiFi (802.11b/g/n) Module
    3. Sparkfun Bluetooth Modem - BlueSMiRF Silver
  Function: 1. WiFi for the Arduinos; 2. I've ordered 3 of these - one for each of the two RasPis and one for the BeagleBone; 3. I guess this could work on any of the micros
  Use: All these Boat Monitors and the Dinghy Black Box
 
 
 
 Dinghy Black Box
Spring Hill, 6 Feb 2012

Essentially the Boat Performance Monitor, but portable and waterproof - that I could use on a sailing dinghy to monitor and improve my performance.
 
 

 Boat Engine Monitor

Monitor engine RPM, temperature, oil pressure, sound, boat speed.  Plot boat speed vs. engine RPM (ideally taking into account effect of wind and waves).
 
 

 Boat Fridge Monitor

Monitor fridge on/off cycles, power usage, fridge/cabin/seawater temperatures.  Possibly modify compressor speed to minimize power usage as described in the Frigoboat section of this Practical Sailor article.
 
 

 Boat Performance Monitor
Last updated: Melbourne, 27 Jan 2010
Notion:
 
My boat instrumentation readings (boat speed, direction, & GPS position; wind speed & direction; and water depth) are all available via a network on the boat.
(a) The notion is to be able to display them on an iPod that could be worn in a handy spot (like on one of those iPod arm bands people use when exercising).
(b) Further, I would like to compile a history of the boat speed, under given conditions, to see if I am sailing as well as I have in the past under similar conditions and thus try to improve my skills.
(c) Further, I'd like to compare this with ideal (IMS) speed roses and with similar data that others might gather on their boats.
 
Interesting problems:
  The iPod user interface would make this snazzy, of course.  But what I think would make this really interesting is keeping the history of how the boat has performed in the past on that particular point of sail.  I.e. on that particular heading relative to the wind and at that wind speed.  Then we could use that past performance as a benchmark - can we do better?  I think it would be very cool to have a public repository of performance data like this somewhere so that we could see how we compare with other boats - sort of a distributed, time-lagged sailing regatta.

Possible solution:
 
Ahh.  The good news is that I probably have all the components I need to put this together - an iPod Touch, a system I can use to host a webserver and integrate the instrumentation data, and finally the instruments themselves.
 
 
 
 Boat Security Monitor
Last updated: Melbourne, 27 Jan 2010
Notion:
 
Keep track of the boat via the web.  This includes sensing the weather, if there is water in the bilge, the battery state, security alarms, a security video webcam, etc.
 
Interesting problems:
  This can probably be done with various off-the-shelf products.  But I'm thinking I can do a better job creating an integrated dashboard specifically for my needs.  We'll see.
 
I'd reuse parts of the Stereo Webcam and Boat Performance Monitor from above.

Possible solution:
 
The Arduino would take the readings from the sensors and give them to the server over a USB connection.  The wind conditions (speed & direction) would be from the existing Raymarine network.
 
 
 
 Weather Monitor

Monitor wind speed and direction, barometric pressure, humidity, and rainfall.  Logged and plotted over time.  Available via a website.  Shared with Weather Underground.
 
 

 HTML Grapher - Notes
Last updated: Mt. Desert Island, 3 Aug 2010 Download the code
I found a cool HTML grapher on http://www.aharef.info/static/htmlgraph/.  Kudos to Marcel Salathé and Jeffrey Traer Bernstein.  It creates a graphical depiction of the HTML tags on webpages.  The individual nodes (that is, the inner circles of each of the nodes in the plots) are colored according to the HTML tag (blue for <a>, green for <div>, red for <table>, etc).
 
I added code to allow viewing more than one page.  A sample screen shot is shown below.  Each connected blob represents a different page on my website (/cr34-analysis.php is yellow, /index.php is red, /sv-breakaway.php is blue, /about-me.php is green, etc).  The page you're now reading, /tp.php, is purple.  It looks like spindly creatures under a microscope.  To me anyway.  In the future, I'd like to show the links between pages and add the ability to pick which pages to show.
 
Following is a sample screen capture of the results:

 
 
Spring Hill, 5 Feb 2012
I coded this originally to run as a Java applet (using an <applet> tag on an HTML page).  That was using Processing's ability to export a sketch as an applet in a jar file.  Even then, it worked on some versions of some browsers and not on others.  I guess it was laudable that the applet support worked as well as it did.  But, it no longer appears to be working on the current version of Firefox.  Running the sketch under the Processing IDE (version 1.5.1 as I write this) is working, and you can download my code from my download page.  I thought about migrating this code to use Processing.js and the <canvas> tag but it is currently dependent on a parser in Java to parse the HTML.  There is probably something available in Javascript to do this - a possible future to-do.  In any case, I've removed the link to the applet version.  The IDE version, which you can download, is IMO still fun to play with.
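For what it's worth, newer browsers do expose an HTML parser to JavaScript, so that to-do may be smaller than I feared.  A minimal sketch of the idea, using the standard DOMParser API (this is a feasibility note, not code from the grapher; older browsers may not accept "text/html" here):
    // Sketch: parse fetched HTML with the browser's own parser and walk the tags,
    // which is the information the grapher needs to build its node graph.
    function tagTree( html ) {
        var doc = new DOMParser().parseFromString( html, "text/html" );

        function walk( element ) {
            var node = { tag: element.tagName.toLowerCase(), children: [] };
            for (var i = 0; i < element.children.length; i++) {
                node.children.push( walk( element.children[i] ) );
            }
            return node;
        }
        return walk( doc.documentElement );   // e.g. { tag: "html", children: [ {tag: "head", ...}, {tag: "body", ...} ] }
    }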
 
 
 
 Monitor the battery on my Macbook
Man-O-War, 18 Oct 2010 Initial (rough) pass at the server-side UI
Shortly after I got back to the boat, my laptop started acting a little berserk when it was recharging.  Even though it isn't that old and I usually run it off the charger (with only about 180 battery recharge cycles so far), at first I thought cripes - the battery is worn out.  It is a little difficult (and costly) to get parts here and I was getting a little anxious.  I googled around and soon found the likely cause - I don't recalibrate the battery often (well.. I think I did it once when I first got this laptop).  But other possible causes mentioned were a bad battery, faulty logic board, problems with the charger, and needing to reset the System Management Controller (SMC).  It wasn't clear at first that the battery was being recharged at all.  But as I watched the power readings in [Apple] > About This Mac > More Info... > Power during that whiney recharge cycle, it did seem to (slowly) get back to a full charge.  I then followed the recommended recalibration process and it's been OK since.
 
Anyway, I got to thinking that a battery cycle grapher might be interesting and could maybe help me improve the battery life.  I also saw some Apple(?) documentation suggesting that at the end of the battery life, these power readings get erratic, so this tool may help me determine when its end of life is approaching (how morbid is that? :-).
 
I wrote a little program that samples the same Power readings as can be viewed in the About This Mac panel I mention above (by programmatically invoking
/usr/sbin/system_profiler
once a minute).  I currently simply write the data to a .csv file.  Here is a sample of the results after formatting the .csv data a bit in Excel.
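The sampling part is simple enough to sketch.  Here is a rough Node.js version of the idea (not my actual program; the field names are what I recall system_profiler's SPPowerDataType output printing, so treat them as assumptions and adjust to what your machine reports):
    // Rough sketch (not the actual program): sample the battery readings once a
    // minute via system_profiler and append them to a .csv file.
    var execFile = require("child_process").execFile;
    var fs = require("fs");

    function sample() {
        execFile("/usr/sbin/system_profiler", ["SPPowerDataType"], function (err, stdout) {
            if (err) { console.error(err); return; }
            // Field names are assumptions - check what system_profiler prints on your machine.
            var fields = ["Voltage (mV)", "Amperage (mA)", "Charge Remaining (mAh)",
                          "Full Charge Capacity (mAh)", "Cycle Count"];
            var values = fields.map(function (name) {
                var pattern = new RegExp(name.replace(/[()]/g, "\\$&") + ":\\s*(-?\\d+)");
                var m = stdout.match(pattern);
                return m ? m[1] : "";
            });
            fs.appendFileSync("battery.csv", new Date().toISOString() + "," + values.join(",") + "\n");
        });
    }

    sample();
    setInterval(sample, 60 * 1000);   // once a minute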
 
 
The vertical axis for the voltage (the purple line) is on the right hand side (0-14V).  The plot starts (on the left) with me unplugging the charger.  When it got to 5 minutes left on the battery (the bottom of the blue "V" in the graph), I plugged the charger back in.  I stopped the plot when the battery indicator (on the MacBook menu bar) was showing "(Charged)".  I was surfing the web and watching a DVD on the laptop while this was going on.
 
Man-O-War, 22 Nov 2010
Here's the current UI.  It displays the readings in realtime (albeit slowly - one sample per minute).  There is also a plot of the daily min/max/ave and a plot of the battery health (current full charge capacity divided by its capacity when it was new) and the current battery life (how long it takes to discharge from a full charge to empty).  There are a couple screen samples of those plots in my Welcome page for the server side reporter.  
I'd also like to have included the ability to overlay previous recharge cycles.  But I am stopping here.  I've just recently come across a very similar utility called MiniBatteryLogger (although that developer's website has apparently been down for the past couple days.  I am curious to see what he was charging.. hmm.. pun there? :-).  Still, I think this project was a "successful failure" in that I learned a lot and got to try some new stuff (new for me anyway).  For example, my user interface included pinch to zoom in/out on the plots and the 2-finger gesture for scrolling the plot (that sounds a little vulgar, doesn't it?), and 3-finger swipe to move between pages.  I'll hopefully be able to reuse some of the code in future projects.  And I enjoyed working on this one.
 
Again, the first draft of the server side of this (now defunct) app is found here.  The user (running the UI, shown above, on their MacBook) would be able to post reports to the server and display them, along with others, on the Reports folder.  My Aggregate Plots folder was intended to look similar to MiniBatteryLogger's Shared Data Battery Archive Page.  As they say, C'est la vie.
[Update: Currently, storing the sample screens from the user is not working on the server, so those parts are blacked out in the Reports folder.  I need to update the version of the jdbc driver on the GoDaddy server to support writing of the images (as BLOBs) into my host database.  It works on my local development server.]
 
 
 
 Sailboat Characteristics Plotter
Man-O-War, 29 Nov 2010 The tool itself
Notes
This is an interactive web-based plotter of sailboat characteristics (length, beam, etc).  It is based on the excellent database and spreadsheet that I downloaded (22 Nov 2010) from John Holtrop's website, www.johnsboatstuff.com (the download link is towards the bottom of his page).  Mine is still very much a work in progress, but intended to (eventually) be an interactive version of the Excel plots of Crealock 34 and 37 characteristics that I made way waaay back in 2000.  I am totally indebted to John Holtrop for posting his database and spreadsheets. 
 
My interactive plotter can be found here.  Below is a screen sample from it, where I've selected to plot the designs for Alberg, Crealock, and Paine.  You can also select from a list of builders, or from a list of all the models.  The model list should be complete.  The others are still under construction.  Please try it out.  And come back from time to time.  I hope to make improvements.
 
    2 Dec 2010
Here are my implementation notes.

    5 Dec 2010
Regarding browser compatibility..   I had wanted to develop this app such that the user does not require a Java plug-in (to run an applet), as was the case for the HTML Grapher.  So, I ported the physics library to JavaScript.  The app does require HTML5 and I'm happy to say that most of the HTML5-enabled browsers appear to be playing well with the app.  Here is what I'm seeing to date:
 
The app appears to be working well on Firefox (v 3.6.12), Opera (v 10.63), Safari (v 5.0.3), and Chrome (v 7.0.517 on Vista and 8.0.552 on Mac).  It was not working well on IE8 (v 8.0.6001) - to be expected as IE8 doesn't support HTML5.  On IE 9 (Beta, v 9.0.7930) the stuff does display but there are scroll bars placed around the graphical navigator.  Apparently IE9 doesn't recognize the style="overflow-x: hidden; overflow-y: hidden;" that I'm using on the <frame> element for that section.  I'll try to work around it (or maybe just give IE a little time to catch up).  If you're on Windows, please try one of the other browsers for now.


 
References: Hull Characteristics that affect Seakindliness/Seaworthiness - some interesting discussion

Hmm.  Regarding this project (well, all these projects), I see this as a form of self-expression.  Sort of an intersection of three popular retirees' pursuits: writing, photography, and model building.  I used to enjoy this sort of stuff at work.. a lot.  Towards the end though, that got kind of knocked out of me.  Too many co-workers asking, "Who asked you to do that/say that/think about that?".  Now, I'm kind of enjoying it again.  I need to say though that my manager for the last 7 or 8 years at work was very flexible and a pleasure to work for.  She kept me from burning out until the very end (when she went on to a non-managerial position).
 
 
 
 Traer's physics library to Processing.js - Notes
Last updated: Man-O-War, 10 Feb 2011 Download & samples
For a graphical navigator for my Sailboat Characteristics Plotter I wanted to avoid the user needing to have a Java plugin to run an applet like with the HTML Grapher.  I like the Processing IDE for the Arduino projects and came across Processing.js.  I am using Jeffrey Traer's physics library and have now ported it to Processing.js.  So a sketch using the Traer physics library API should be able to run in any HTML5 capable web browser without a Java plugin.  It will run as JavaScript (supported by Processing.js) and not as an applet.  All the current versions of the browsers (and IE9 Beta) appear to support HTML5 and my testcases all seem to run OK with them (some faster than others, but so far they all have functioned the same).
 
Anyway, to port Traer's code, I made the following changes:
 
  Man-O-War, 9 Dec 2010
Downloaded the 3.0 source code from traer.cc/mainsite/physics/ (what is now murderandcreate.com/physics).  Here are the terms from his download page:

 LICENSE - Use this code for whatever you want, just send me a link jeff [at] traer.cc

Renamed class variables so that they are not the same as accessor methods of the class.  The problem is that JavaScript apparently doesn't handle the situation where there is a method called "foo()" and a variable within the class also called "foo" (there is a small illustration of this after this list of changes).
Commented out the import java.util.* in ParticleSystem and RungeKuttaIntegrator.  I thought I would need to do something with ArrayList, but thankfully that appears to be supported in Processing.js so there was nothing more I needed to do there.
To remove particles and attractions from a ParticleSystem, I tried using the existing removeParticle(Particle) and removeAttraction(Attraction) methods.  They seem to work OK in the IDE, but not in the Processing.js environment.  I didn't spend any time investigating.  Using the removeParticle(int) and removeAttraction(int) versions seem to work OK.  I added a couple methods (marked in the code with "mrn") to make using the "int" versions a little easier.
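A tiny illustration of that first (renaming) problem - this is the general JavaScript pattern, not code from the library:
    // In Java a field and a method can share a name; translated to JavaScript they
    // collide, because a "method" is just an object property that holds a function.
    function Particle() {
        this.age = 0;                 // the field...
        this.age = function () {      // ...is immediately overwritten by the method of the same name,
            return this.age;          // which now returns the function itself rather than a number
        };
    }
    // The fix used in the port: rename the field (e.g. this.myAge = 0) and keep the accessor.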
Man-O-War, 10 Feb 2011
Carl Pearson made some nice improvements to the traer3.pde which I am reposting here as traer3a.pde.  In particular, (paraphrasing him) he
1. fixed a bug in the Euler integrators where they divided by time instead of
    multiplying by it in the update steps,
2. eliminated the Vector3D class - converting the code to use the native
    PVector class,
3. did some code compaction in the RK solver,
4. added a couple nice convenience classes, UniversalAttraction and Pulse,
    simplifying my Pendulums sample considerably.
 
There is a small change to the Traer library API, namely change Vector3D to PVector.
 E.g. say there is a Particle p;
    instead of p.position()               use p.position
    instead of p.position().x()           use p.position.x
    instead of p.position().setX( 123 )   use p.position.x = 123
    instead of p.position().set( 4,5,6 )  use p.position = new PVector( 4,5,6 )
    instead of p.position().clear()       use p.position = new PVector( 0,0,0 )

Here is the result (download the code):
 

dynamics.pde (a simple pendulum sample - Carl Pearson's new version)

traer3a.pde (the Traer physics library, ported to Processing.js - with Carl Pearson's improvements)

widgets.pde (buttons for the pendulum sample)

dynamics.html (a sample html file)

Here is my previous version of the library and pendulums sample code for reference.
Man-O-War, 17 Dec 2010
The pendulums sample displays the frames per second (fps).  Here are some results from that (original Pendulums) sample:
Viewer            Viewer's Version              Frames per second for pendulums sample (OS-X / Windows)
                  (OS-X / Windows)              F.R. Undefined     F.R. Limited       F.R. Unlimited
                                                (default)          frameRate(60)      frameRate(9999)
Processing IDE    1.2.1 / 1.2.1                 60 / 60            60 / 60            460 / 433
Opera             11.00 / 11.00                 221 / 215          58 / 58            218 / 220
Chrome            8.0.552.231 / 8.0.552.224     202 / 223          61 / 60            201 / 220
Safari            5.0.3 / na                    98 / na            60 / na            99 / na
IE9 Beta          na / 9.0.7930.16406           na / 58            na / 53            na / 58
Firefox           3.6.13 / 3.6.13               47 / 36            47 / 35            47 / 36
These were all run on a 2009 13" MacBook Pro (2.53 GHz Core 2 Duo w/ 4GB RAM)
    OS-X: Snow Leopard v10.6.4
    Windows: Vista Home Premium SP2 on Bootcamp (sorry, that's all I have)

Or graphically:

I think whether the sample can do 400 fps is a little moot - it should be limited to say, 60 fps (by making a call to frameRate(60) in setup()) to make sure it looks OK across browsers.  But, the questions IMO are:
  1. How well does the performance scale with the complexity of sketches using this library?  That is, in a larger animation (more objects, a larger canvas), does performance become unacceptable on the browsers that are marginal on this (as yet) very simple sample?
  2. Are there easy changes to the application or library that result in good gains? -- the low-hanging fruit.
  3. Why is there such a big spread between browsers?  Will some browsers/platforms do better on graphics applications because they have a better graphics library implementation/hardware assist/underlying support?
  4. Is this wide spread seen in the JavaScript benchmarks?  Can the existing benchmarks help explain it?
Further Reading:
  arborjs.org
 
 
 
 Dromaeo Frontend
Man-O-War, 2 Feb 2011 The tool (v20110112)  /  dromaeo.com
Regarding the question (above),
  1. Is this wide spread seen in the JavaScript benchmarks?  Can the existing benchmarks help explain it?
I took John Resig's Dromaeo benchmark suite (which is pretty sweet, IMO) and added some graphing of the results (using Processing.js, of course).  The tool is here (v20110112).  It is rough yet, but I think it shows that there is a wide spread in performance between the browsers.  The plot shows the results from the Dromaeo "Run All JavaScript Tests" suite for the five leading browsers - Chrome, Firefox, IE, Opera, and Safari; run on my system (a 13" MBP - details above).  You can run the same benchmarks on your system by clicking on "Run All JavaScript Tests", then "Run".  Your results should be marked on the plot as black bars.  That result will be apples-and-oranges as we are on different systems - it's just a simple demo.  (I had planned to add the option to let you save your results under your own userid/pw, but have become busy with other projects and am setting this one aside.)
 
Applying the results in a more practical way may require profiling the application, doing hot-spot analysis, etc.  So the questions,
  1. Are there easy changes to the application or library that result in good gains?
    -- the low-hanging fruit.
  2. Why is there such a big spread between browsers?  Will some browsers/platforms do better on graphics applications because they have a better graphics library implementation/hardware assist/underlying support?
may take a little while yet.  However, I'd like to next look at
  1. How well does the performance scale with the complexity of sketches using this library?  That is, in a larger animation (more objects, a larger canvas), does performance become unacceptable on the browsers that are marginal on this (as yet) very simple sample?
Man-O-War, 4 Jan 2011
To address that question, I added a graphics benchmark to the tool.  It is based on the pendulums sample.
Here is a screen sample of the benchmark rendering the pendulums (here, 30 balls on a 200x200 canvas):
 

And here are some results that I'm seeing on the different browsers:
Config: Chrome 8.0.552, Firefox 3.6.13, IE 9.0.7930, Opera 11.00.1156, Safari 5.0.3.  IE was run on Vista SP2 on Bootcamp, the rest were on OS-X 10.6.5.  All runs were on a 2009 13" MacBook Pro.  Click on the image to enlarge.
 
Each row is a different canvas size - from 50x50 to 400x400.  The columns are browsers - labeled along the top.  On each graph, the y-axis is fps (ave. over a 3 sec run), the x-axis is #balls, the red line is fps without actually drawing the balls and lines, the blue line is fps drawing the graphics (using Processing.js), and the green area is the difference between them (the cost, in fps, of the circle and line drawing).
 
The results from Chrome are interesting - the way it appears to degrade more dramatically than the other browsers as #balls increase when the graphics are rendered.  It still ends up well above the others in all cases, and still comfortably over 60 fps.
 
    Man-O-War, 5,6 Jan 2011  
 
 
At first I thought this looked like an opportunity for improvement in Chrome's performance.  Then I realized that it was an illusion.  I converted the data to estimated ms of CPU time* to render each frame and have plotted that here.  Chrome appears to be one of the best with its rendering code.
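(The arithmetic behind an estimate like this is just a difference of the frame times implied by the two fps measurements - a minimal sketch, assuming those fps numbers are the only inputs:)
    // Minimal sketch: take the frame time with rendering minus the frame time
    // without rendering as the cost of the rendering itself (per frame, in ms).
    function renderMsPerFrame( fpsWithRendering, fpsWithoutRendering ) {
        return 1000 / fpsWithRendering - 1000 / fpsWithoutRendering;
    }
    // e.g. renderMsPerFrame( 200, 220 ) is about 0.45 ms of rendering per frame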
 
To the right is a plot of my estimated CPU times* for everything else (calculating the positions of the balls, calculating and displaying the fps, and the draw() loop overhead - just not rendering the balls and lines).  For each browser, they are pretty consistent across the canvas sizes (a sanity check on my estimates).  Here I ran each testcase for 10 seconds (rather than 3).  Opera showed the
problem reported in the next paragraph in the first series (50x50), so their times here may be higher than they would be without that problem.
 
*CPU time of 1 core of the 2.53 GHz dual core processor on my 2009 13" MacBook Pro.

The Opera results are also interesting - that is, the degradation at 100x100/10 balls/with graphics rendered (the fork on the left of that graph).  And then at
200x200, the fps are less than half of what I see running the original pendulums sample!  To see if there was a cumulative effect from running the 50x50 and 100x100 testcases first, I tried the benchmark again, this time starting at 200x200 (results shown in the figure to the right) and sure enough - the fps for 10 objects were well over 200 for the first testcase (200x200) and similarly degraded on the second (400x400).  Maybe some resource is getting allocated during the first two testcases and not being returned efficiently?  I think this is kind of interesting and I'm going to keep the benchmark as is (rather than try to more pristinely reset the state between canvas sizes).  It's something that doesn't appear to be showing up with the other browsers and is probably as real-world as anything.  I will look at my storage allocation more closely though to see if it's something basic in the app.    

Opera makes a stronger showing than Chrome on the trivial end (small # objects/small canvas).  Fixing their resource allocation problem (assuming it's not something basic that I'm not releasing in my code) will probably help in more complex animations.  But, without further improvements to their code, on the more challenging end (large # objects) it will probably still be surpassed by Chrome's performance.
 
Man-O-War, 8 Jan 2011
I added an SVG variant to the Pendulums benchmark.  Here are some results that I'm seeing:
Config: Chrome 8.0.552, Firefox 3.6.13, IE 9.0.7930, Opera 11.00.1156, Safari 5.0.3.  IE was run on Vista SP2 on Bootcamp, the rest were on OS-X 10.6.5.  All runs were on a 2009 13" MacBook Pro.  Click on the image to enlarge.
 
Each row is a different canvas size - from 50x50 to 400x400.  The columns are browsers - labeled along the top.  On each graph, the y-axis is fps (ave. over a 10 sec run), the x-axis is #balls, the red line is fps without actually drawing the balls and lines, the green line is fps drawing the graphics using Processing.js, and the blue line is fps drawing the graphics using Raphael/SVG.  The green and blue areas show the difference between not rendering and rendering using Processing.js and Raphael/SVG, respectively.
 
    Man-O-War, 9 Jan 2011
 
Here are my estimates of ms of CPU time* to render each frame using Processing.js.  (In case you try running the benchmark yourself) FYI, during the benchmark run, the balls and lines are drawn on the green Processing.js canvas.
 
 
 
Here are my estimates of ms of CPU time* to render each frame using Raphael/SVG.  The benchmark uses Processing.js to calculate the positions of the balls (using the Traer physics library) and to calculate and report the fps on the Processing.js canvas (which is gray during this part of the benchmark).  But when it comes time to draw the balls and lines, the code calls a function that invokes the Raphael circle() and path() functions, which I believe are SVG-based.  FYI, during the benchmark run, the balls and lines are drawn on the blue, div-based, Raphael object.  The now-gray Processing.js canvas continues to report the fps during the run.
 
Here are my estimated CPU times* for everything else (calculating the positions of the balls, calculating and displaying the fps, and the draw() loop overhead - just not rendering the balls and lines).  They are pretty consistent across the canvas sizes (a sanity check on my estimates).  Each testcase is run for 10 seconds (rather than 3) so again, Opera showed the problem (reported  
above) in the first series (50x50) and their times here may be higher than they would be without that problem.  FYI, during the benchmark run, the Processing.js canvas is grayed out, with the fps reported during the run.
 
*CPU time of 1 core of the 2.53 GHz dual core processor on my 2009 13" MacBook Pro.

Man-O-War, 12,13 Jan 2011
I added a version of the SVG testcase that reuses the Raphael circle and line objects, rather than creating new ones for each frame.  It is shown by the dashed blue line on these graphs:
Config: Chrome 8.0.552, **Firefox 3.6.13, ***Firefox 4.0b8, IE 9.0.7930, Opera 11.00.1156, Safari 5.0.3.  IE was run on Vista SP2 on Bootcamp, the rest were on OS-X 10.6.5.  All runs were on a 2009 13" MacBook Pro.  Click on the image to enlarge.
 
Each row is a different canvas size - from 50x50 to 400x400.  The columns are browsers - labeled along the top.  On each graph, the y-axis is fps (ave. over a 10 sec run), the x-axis is #balls, the red line is fps without actually drawing the balls and lines, the green line is fps drawing the graphics using Processing.js.  The solid blue line is fps drawing the graphics using Raphael/SVG creating new circle and line objects for each frame.  The dashed blue line is reusing the objects from frame to frame - just changing their locations using Raphael's attr method on the objects.  The green and blue areas show the difference between not rendering and rendering using Processing.js and Raphael/SVG, respectively.  Please try it yourself.
 
The new and improved Raphael/SVG testcase (dashed blue line) generally performed better than my initial version.  My initial version simply replaced the calls to Processing's ellipse() and line() with calls to Raphael's circle() and path().  The improved version allocates arrays of circle and path objects when the benchmark starts (before measuring fps starts) and then reuses them over the course of the benchmark, using Raphael's attr() method to change the locations.  The odd exception is that on Safari and the 400x400 canvas on Opera, the performance got worse for the 10-ball case.
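The gist of the two variants, in case you want to poke at the code (a trimmed-down sketch, not the benchmark source; the variable names are made up, but paper.circle(), paper.clear() and attr() are the Raphael calls involved):
    // Trimmed-down sketch of the two Raphael/SVG variants compared above.
    var paper = Raphael( "raphaelDiv", 200, 200 );   // the div-based Raphael object

    // V0: create brand-new circles for every frame (clearing the old ones first)
    function drawFrameV0( balls ) {
        paper.clear();
        for (var i = 0; i < balls.length; i++) {
            paper.circle( balls[i].x, balls[i].y, 5 );
        }
    }

    // Improved: allocate the circles once, then just move them with attr() each frame
    var circles = [];
    function setupReuse( n ) {
        for (var i = 0; i < n; i++) circles.push( paper.circle( 0, 0, 5 ) );
    }
    function drawFrameReuse( balls ) {
        for (var i = 0; i < balls.length; i++) {
            circles[i].attr( { cx: balls[i].x, cy: balls[i].y } );
        }
    }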
 
The other surprise was that with IE9 the benchmark stops during the 100x100 / 20 ball run.  This occurred in the new testcase and happened on each of the 3 times I tried to run the benchmark.  Ugh.
 
Opera showed the problem (reported above) in the first series (50x50).  So its results on the larger canvas sizes are probably degraded.
 
Man-O-War, 14 Jan 2011
Per a suggestion by timeu (post #11 in that thread), I ran the benchmark out to 400 balls (that's 1200 objects rendered - each pendulum is drawn as 2 circles and a line).  I also changed the plots to show the Raphael/SVG testcase that reuses objects as the solid blue line (and blue area) and my original version that creates a new set of objects for each frame as the dashed blue line.  Here are the current results:
Config: Chrome 8.0.552, Firefox 3.6.13, Firefox 4.0b8, Firefox 4.0b9, IE 9.0.7930, Opera 11.00.1156, Safari 5.0.3.  IE was run on Vista SP2 on Bootcamp, the rest were on OS-X 10.6.5.  All runs were on a 2009 13" MacBook Pro.  Click on the image to enlarge.
 
Each row is a different canvas size - from 50x50 to 400x400.  The columns are browsers - labeled along the top.  On each graph, the y-axis is fps (ave. over a 10 sec run), the x-axis is #balls, the red line is fps without actually drawing the balls and lines, the green line is fps drawing the graphics using Processing.js.  The dashed blue line is fps drawing the graphics using Raphael/SVG creating new circle and line objects for each frame.  The solid blue line is reusing the objects from frame to frame - just changing their locations using Raphael's attr method on the objects.  The green and blue areas show the difference between not rendering and rendering using Processing.js and Raphael/SVG, respectively.
 
On IE9, the benchmark stops during the 100x100 / 20 ball run.  Opera showed the problem (reported above) in the first series (50x50).  So its results on the larger canvas sizes are probably degraded.
 
Here is a plot of the estimated CPU time (ms) to render each object (ie. the average time in ms for all the circles and lines on each frame refresh).  
 
Here is what I think is driving the performance in this benchmark.  Again, this is a plot of estimated ms of CPU time per object - but here the object is not actually rendered.  Let's look at one of the datapoints - say, the Firefox 3 / 400-ball datapoint in the upper right corner of the graph - measured at ~10ms for 400 balls:
        10 ms x 400 balls x 3 objects per ball = ~12,000 ms, ie. ~12 seconds
to calculate the new positions for each frame.  The measurement for that datapoint was:
        11.8 sec for 1 frame (again, without rendering).
The corresponding measurements where the frame *was* rendered were:
        11.8 sec/frame with rendering using Processing.js and
        12.3 sec/frame with rendering using Raphael/SVG.
As the # balls increases, more and more of the CPU time is spent calculating the new positions per object. 

The benchmark is pretty simple.  There are from 10 to 400 pendulums ("# balls").  Each is represented by a fixed point or particle and a free particle, as well as a spring between them.  There is an attraction force (set to a negative value so they are repulsions) between all the free particles.  For each frame refresh (tick of the physics system), the attraction forces are calculated between each particle and all the others.  So it is an n² problem.  The spring calculation and rendering are probably linear with n.  So, I'd guess the complexity to be:
     (n x (n-1))/2 (forces) + n (springs) + n (rendering) = n²/2 + 3n/2, ie. on the order of n²
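In code terms, the dominant part is the all-pairs force pass - roughly this shape (paraphrased, not lifted from the library; applyAttraction is just a stand-in name):
    // Paraphrased shape of the physics tick: every free particle repels every other
    // free particle, so this loop alone is O(n²) in the number of pendulums.
    for (var i = 0; i < particles.length; i++) {
        for (var j = i + 1; j < particles.length; j++) {
            applyAttraction( particles[i], particles[j] );   // negative strength => repulsion
        }
    }
    // The spring update and the rendering are single passes - linear in n.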
 
Here is a run on Firefox 3, after commenting out the attraction forces code.  Much better performance at the higher number of objects.  Compared with the times mentioned above for the 400-ball testcase, the time to calculate the positions for each frame goes down from 11.8 sec to 0.20 sec, and the times with rendering go from 11.8 sec to 0.27 sec for Processing.js and from 12.3 sec to 0.48 sec for Raphael/SVG.  Note that the x-axis is plotted on a log scale, so there is still something non-linear.
 
But with the forces removed, the animation is *very* boring.  Those forces are a vital part of it.
 
 

Man-O-War, 21 Jan 2011
To recap, the benchmark I used above (which was based on the pendulums sample) was dominated by the calculations of the physics model - with a large non-linear component being figuring the attraction forces between the balls swinging on the pendulums.
 
To take a closer look at just the graphics performance, I made up a benchmark that just draws shapes - circles, rectangles, lines, and a short text string - in a simple loop.  To the right is a screen sample.  The benchmark varies from 4 to 4000 of these shapes (the datapoint of "4 shapes" renders one of each of these shapes, the datapoint for "4000 shapes" renders 1000 of each).  The shapes are positioned and colored randomly from frame to frame.  A table of random numbers is created at startup to avoid generating random numbers during the measurement.  As before, the canvas size ranges from 50x50 to 400x400. 
 
Screen sample from "Just Shapes" benchmark
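The draw loop itself is about as plain as it sounds.  Here is an illustrative sketch of it in Processing.js instance mode (not the benchmark's source - the element id, shape sizes and text string are placeholders):
    // Illustrative sketch of the "Just Shapes" loop.  A table of random numbers is
    // built up front so no random numbers are generated while measuring.
    var randTable = [], randIdx = 0;
    for (var i = 0; i < 10000; i++) randTable.push( Math.random() );
    function rnd() { return randTable[ randIdx++ % randTable.length ]; }

    function shapesSketch( p ) {
        var nShapes = 400;                                 // the benchmark steps this from 4 to 4000
        p.setup = function () { p.size( 200, 200 ); };     // canvas sizes range from 50x50 to 400x400
        p.draw = function () {
            p.background( 255 );
            for (var n = 0; n < nShapes; n += 4) {         // one of each of the 4 shapes per pass
                p.fill( rnd()*255, rnd()*255, rnd()*255 );
                p.ellipse( rnd()*p.width, rnd()*p.height, 10, 10 );                      // circle
                p.rect( rnd()*p.width, rnd()*p.height, 10, 10 );                         // rectangle
                p.line( rnd()*p.width, rnd()*p.height, rnd()*p.width, rnd()*p.height );  // line
                p.text( "hi", rnd()*p.width, rnd()*p.height );                           // short text string
            }
        };
    }
    new Processing( document.getElementById("benchmarkCanvas"), shapesSketch );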

Here are the results from the different browsers:
Config: Chrome 8.0.552, Firefox 3.6.13, Firefox 4.0b9, IE 9.0.7930, Opera 11.00.1156, Safari 5.0.3.  IE was run on Vista SP2 on Bootcamp, the rest were on OS-X 10.6.5.  All runs were on a 2009 13" MacBook Pro.  Click on the image to enlarge.
 
Each row is a different canvas size - from 50x50 to 400x400.  The columns are browsers - labeled along the top.  On each graph, the y-axis is fps (ave. over a 10 sec run), the x-axis is # shapes, the red line is fps without actually drawing the shapes, the green line is fps drawing them using Processing.js.  The dashed blue line is fps drawing the graphics using Raphael/SVG creating new circle, line, rectangle and text objects for each frame.  The solid blue line is reusing the objects from frame to frame - just changing their locations and color using Raphael's attr method on the objects.  The green and blue areas show the difference between not rendering and rendering using Processing.js and Raphael/SVG, respectively.
 
Opera showed the problem (reported above) in the 500 shape testcase of the first series (50x50 canvas).  So its results on the larger canvas sizes are probably degraded.  I have not included it in the plots below.
 
Here is a plot of the estimated CPU time (ms) to render each shape (ie. the average time in ms for all the circles, lines, rectangles, and text on each frame refresh).  The red lines are Raphael/SVG and the green lines are Processing.js.  The data is from the 200x200 canvas size runs.
 
These times are calculated PER shape rendered.  So a result that is linear with the number of shapes would be a flat, horizontal line.  As you can see, the Processing.js rendering time is quite a bit less than the Raphael/SVG rendering time and is linear with the number of shapes rendered (more easily seen in the next graph).  The IE9 SVG curve (it's the upper red one) appears to be in a class by itself :-).  There was a smart old engineer in Poughkeepsie who I had the privilege to work with who used to say to us, "Problems are easy - find 'em and fix 'em".  I assume Microsoft will do that here.
 
What's interesting are the SVG curves for Firefox - both version 3 and Beta 4.  They seem to get more efficient as the number of shapes increases!
 
Here is the same graph, with the y-scale expanded to show the Firefox and Processing.js results a little better.  
 
Here is the estimated CPU time per frame for just the benchmark loop - without rendering any shapes.  They are flat, as expected.

Here (v20110121) is this benchmark (called "Just Shapes").  To run it, "Misc Graphics Tests" > "Run".  There will be a log of the datapoints produced as well (in the split screen that should show up - copy & paste it into a .csv file if you want to import it into a spreadsheet).  The format of the data is as follows:
    <m>,<n>,<# shapes>,<fps>
        where <m> = canvas size; <n> = 0: w/o rendering, 1: Processing.js, 2: Raphael/SVG V0 (creates objects each frame), 3: Raphael/SVG (reuses objects)
Each datapoint is 2 seconds.
 
Man-O-War, 24 Jan 2011
Regarding Firefox appearing to get more efficient as #shapes increased, I wonder if they are doing some sort of clipping or z-buffer elimination of shapes that are, say, completely covered (and thus don't need to be rendered at all).  To look at this, I made up a benchmark that draws 6x6 squares on a 400x400 canvas, with none of the squares overlapping.  The result was a little disappointing because the behavior of the other browsers changed - IE9 even turned in the best SVG performance!  I suspect I took too big a step away from the case I was examining - perturbing things a bit too much.  Below is a screen sample from a run and the curves:
 

Screen sample from "Tiny Squares" benchmark

Everybody looks fairly flat in this range, with Firefox 3 SVG and Safari SVG growing a little faster than the rest.

This is the plot of the estimated CPU time for just the benchmark loop (without any rendering).  They are flat, as expected.

OK.  I went back and made up a benchmark that is closer to the original "Just Shapes" benchmark.  This time, I draw 6x6 rectangles, a 6-pixel diameter circle, an 8-pixel long diagonal line, and the letter "h" - again, none of them overlapping.  The results are below.  The IE9, Chrome and Safari SVG curves look very similar to the previous "Just Shapes" results of the 21st.  And this time the Firefox SVG curves show modest growth as the #shapes increases.  So, I suspect Firefox is doing some sort of clipping/z-buffering in its rendering.  Note that their SVG curves are again better than any of the other browsers.  Kudos to their SVG rendering implementation.
 

Screen sample from "Tiny Shapes" benchmark

Ahh.  This is more like it :-).  The curves are similar to the ones from the "Just Shapes" benchmark, except we see that the Firefox curves are sloping upward very gradually.
 

Here is the same plot with the y-scale expanded to show the Firefox curves a little better.

And here is the same plot again with the y-scale expanded more to show the Processing.js curves a little better.

The plot of the estimated CPU time for just the benchmark loop (without any rendering).  They are flat, as expected.

Here (v20110123) is the "Tiny Squares" benchmark and here (v20110125) is the "Tiny Shapes" benchmark.  To run either, "Misc Graphics Tests" > "Run".  There will be a log of the datapoints produced as well (in the split screen that should show up - copy & paste it into a .csv file if you want to import it into a spreadsheet).  The format of the data is as follows:
    <m>,<n>,<# shapes>,<fps>
        where <m> = canvas size; <n> = 0: w/o rendering, 1: Processing.js, 2: Raphael/SVG V0 (creates objects each frame), 3: Raphael/SVG (reuses objects)
Each datapoint is 2 seconds.
Man-O-War, 27 Jan 2011
So, IE9 appeared to do well in the SVG results from Tiny Squares (non-overlapping squares), but was a disaster in Tiny Shapes (non-overlapping circles, squares, lines, and text).  What gives?  I made up a benchmark that separately runs through each of the 4 shapes included in Tiny Shapes.
 
Here is the fps plot from the run.  The red line is without rendering, the blue line is equal numbers of non-overlapping circles, lines, squares, and the character "h" (reproducing Tiny Shapes - see screen sample above).  Then there is a series of  
datapoints for each of just circles, just lines, just squares, and just the letter "h".
 
They are shown by the dashed gray lines with the "C" (for Circles), "L" (for Lines), "R" (for Rectangles - ie. the squares), and "T" (for Text) to the left of the lines.  In my run, "C", "L", and "R" were pretty much identical (with the C, L, and R jammed together in one blob at the top of the y-scale).  Ah ha!  "T" (Text) appears to be the culprit.  It cut the fps in half on the 4 shapes/frame datapoint and continued the trend throughout the range.  Hopefully, by the time you read this, Microsoft has found and fixed this problem.  It doesn't seem like rocket science :-).
 
Here is a plot of estimated CPU time (ms) per shape rendered.  The times for Circles, Lines, and Rectangles are very small (and flat on this scale).  Again, Text appears to be not very efficient.  "All" corresponds to the solid blue line in the previous plot - equal numbers of circles, lines, rectangles, and text.  "Ave of 4 components" is calculated in the spreadsheet, ie. ( Circles + Lines + Rectangles + Text )/4.  The "All" curve (a measurement) and the "Ave of 4 components" curve (calculated from  
the 4 separate measurements) look pretty close - a good sanity check on the measurement.  "wo/ rendering" is the benchmark loop without doing any rendering.  It is flat, as expected.

Here (v20110126) is this benchmark (called "Tiny Shapes 2").  To run it, "Misc Graphics Tests" > "Run".  There will be a log of the datapoints produced as well (in the split screen that should show up - copy & paste it into a .csv file if you want to import it into a spreadsheet).  The format of the data is as follows:
    <n>,<# shapes>,<fps>
    where <n> = 0: w/o rendering, 3: All, 4: Circles, 5: Lines, 6: Rectangles, 7: Text
Each datapoint is 2 seconds.  There may be points where IE9 blanks out or appears to freeze, but it *should* finish - just be patient.  On Safari, you may need to disable the JavaScript long-running timeout ( Develop > Disable Runaway JavaScript Timer ).  I did with the more demanding version of this (10 second datapoints, up to 4000 shapes).  On the other browsers, I had no problems running the benchmark.
 
Please send me any comments or post to this Processing.js forum entry.  If you try the benchmark and see much different results - say, that IE9 performs OK on "Tiny Shapes" or that Text is not far off from Circles, Lines and Rectangles in "Tiny Shapes 2" - I'd like to hear about it, along with something about your configuration (OS, hardware, version of IE9).  All the other browsers showed slightly degraded results for Text compared with the other shapes in "Tiny Shapes 2", but nothing as dramatic as shown by IE9.  Maybe it's my system.
 
Updates:
  12 Jan 2011  1. Added a version of the SVG testcase that reuses the Raphael circle and line objects (rather than creating new ones) for each frame refresh.
  8 Jan 2011   1. Added an SVG testcase with comparison to rendering in Processing.js.
  3 Jan 2011   1. Added plots of the pendulums results.
  2 Jan 2011   1. Added an initial graphics benchmark - the pendulums (traer3.pde) sample.
               2. Added the browser version numbers to the output.
  26 Dec 2010  Initial version

Still to-do:
  • Display testcase source code when user clicks on the testcase name
  • Store results for a whole series using some sort of userid/password
  • Look at profiling or "finger printing" a benchmark
  • Look at adding hot-spot analysis
  • Create a .csv format output of the results for use in a spreadsheet
 
 
 
 Man-O-War Heritage Museum Website
Man-O-War, 15 Mar 2011
This is a place to put my notes regarding the Man-O-War Heritage Museum website that we've put together.  I think I'd like to develop some interactive family trees.  Then it would be nice to mash that up with the historical context.
 

 
 My Mac Notes
Saylorsburg, 12 Nov 2012
Got a new 13" Retina MacBook Pro.  Fast, thin, light, nice display, shiny.  Only thing is that the keyboard feels odd.  Maybe just a matter of time to break it in?
-- End-of-file --