Tagged: code code

  • shag 11:21 am on July 20, 2010 Permalink | Log in to leave a Comment
    Tags: code code

    Hardware volume key support on XFCE 

    XFCE4 from Debian unstable (“sid”) does not seem to bind the hardware volume-up, volume-down, and mute keyboard buttons on my ThinkPad to any actions.  It’s convenient to be able to control the sound card with these dedicated buttons – no twiddling around with the mouse to reach for a volume slider when NWA hits your playlist and the boss walks over.  I suppose this is just a rough edge with the current XFCE integration; Gnome seems to handle this pretty well out of the box, but then again, Gnome is a bloated pig compared to XFCE. So here’s one way to make the buttons work in XFCE.

    First, you’ll need to make sure some prerequisite packages are installed: the package that provides the “amixer” binary, and some variant of awk. Open an XFCE terminal window by clicking on the XFCE “mouse” logo in the XFCE panel, then clicking “Terminal”. Become root by running the following and entering your root password:

    su -

    Then install the appropriate packages — here’s how to do it in Debian:

    apt-get install alsa-utils mawk

    Once the packages are installed, drop root privileges:

    exit

    Then let’s pick a place for our scripts to live. This example will assume that they will go into a ‘bin’ directory underneath your home directory; so, let’s make sure that exists:

    mkdir ~/bin

    Now, on to the scripts. Copy and paste the following script as “~/bin/alsa-toggle-mute”. Probably the easiest way to do this with a stock Debian system is to start the ‘nano’ editor with:

    nano ~/bin/alsa-toggle-mute

    then highlighting the script below, using your middle mouse button to paste it into the nano terminal window, and finally typing “Ctrl-x y ENTER” to save the file:

    #!/bin/sh
    # alsa-toggle-mute
    # AMIXER: path to your 'amixer' binary
    # CONTROL: sound card control to adjust: run 'amixer' to find this
    # LEVELKEY: pattern on 'amixer' output lines to search for that provides
    # the current control level
    # (the assignments below are typical defaults; adjust for your card)
    AMIXER=/usr/bin/amixer
    CONTROL=Master
    LEVELKEY='Front Left:'
    $AMIXER sset $CONTROL \
    `$AMIXER sget $CONTROL |
    awk "(/$LEVELKEY/) { if (\\$NF == \\"[on]\\") { print \\"mute\\" } else { print \\"unmute\\" } }"` > /dev/null

    Make the script executable:

    chmod 700 ~/bin/alsa-toggle-mute

    Then attach it to the hardware mute button with the XFCE keyboard settings manager GUI. Click on the XFCE “mouse” logo that should be on the edge of your XFCE panel; click “Settings” from the pop-up menu; then click “Keyboard”. Click on the “Application Shortcuts” tab. Then click “Add”. XFCE will ask for a path to the command; enter the full path to your alsa-toggle-mute script, e.g., “/home/username/bin/alsa-toggle-mute” — of course, you will have to substitute your username in that command. XFCE will then pop up a box waiting for the “Command Shortcut” — at this point, press your hardware mute button.

    That should be it! Your mute button should now work.
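    The quoting in the alsa-toggle-mute one-liner is hairy, but its logic is simple; here is the same test sketched in Python (the sample amixer line is illustrative; real output varies by sound card):

```python
def toggle_action(amixer_line):
    # the last whitespace-separated field of the matching line is "[on]" or "[off]"
    return "mute" if amixer_line.split()[-1] == "[on]" else "unmute"

toggle_action("  Front Left: Playback 45 [70%] [on]")   # "mute"
toggle_action("  Front Left: Playback 45 [70%] [off]")  # "unmute"
```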

    Now let’s add support for the volume-up button. Save this script to “~/bin/alsa-volume-up”:

    #!/bin/sh
    # alsa-volume-up
    # OFFSET: amount to increment/decrement the control level: e.g., -1
    # decrements the volume, 1 increments the volume; values with
    # a greater absolute magnitude will decrement or increment the
    # control level more quickly. Use "$1" if you wish to pass
    # the offset in from command line parameters
    # AMIXER: path to your 'amixer' binary
    # CONTROL: sound control to adjust: run 'amixer' to find this
    # LEVELKEY: pattern on 'amixer' output lines to search for that provides
    # the current control level
    # LEVELFIELD: awk field number of the current control level
    # (the assignments below are typical defaults; adjust for your card)
    OFFSET=1
    AMIXER=/usr/bin/amixer
    CONTROL=Master
    LEVELKEY='Front Left:'
    LEVELFIELD=4
    $AMIXER sset $CONTROL \
    `$AMIXER sget $CONTROL | \
    awk "(/$LEVELKEY/) { print \\$$LEVELFIELD + $OFFSET }"` > /dev/null

    Make it executable:

    chmod 700 ~/bin/alsa-volume-up

    Attach it to the hardware volume up button by following the same instructions as for the mute script (above), but in place of “/home/username/bin/alsa-toggle-mute”, use “/home/username/bin/alsa-volume-up”. And rather than pressing the hardware mute button, press the hardware volume up button.
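    The volume scripts do the same line-matching, but print an adjusted level instead of mute/unmute; here is that arithmetic sketched in Python (the field number and sample line are illustrative):

```python
def new_level(amixer_line, level_field, offset):
    # mirrors the awk action { print $LEVELFIELD + OFFSET }
    return int(amixer_line.split()[level_field - 1]) + offset

line = "  Front Left: Playback 45 [70%] [on]"
new_level(line, 4, 1)   # volume up: 46
new_level(line, 4, -1)  # volume down: 44
```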

    Then let’s handle the hardware volume down button. This time we will take a shortcut. Rather than copying and pasting another script, we’ll just copy and edit the volume-up script that you’ve already saved:

    cp ~/bin/alsa-volume-up ~/bin/alsa-volume-down

    Then edit the ~/bin/alsa-volume-down script in nano to change the line that reads “OFFSET=1” to “OFFSET=-1”:

    nano ~/bin/alsa-volume-down

    .. edit the script, then use “CTRL-x y ENTER” to save.

    Then, as before, attach the script to the hardware volume down button by following the same instructions as for the mute script (above), but in place of “/home/username/bin/alsa-toggle-mute”, use “/home/username/bin/alsa-volume-down”. And rather than pressing the hardware mute button, press the hardware volume down button.

  • rajbot 11:33 am on January 24, 2010 Permalink | Log in to leave a Comment
    Tags: api, code code

    Signing Amazon Web Services API Requests in Python 

    I wanted to ping the “Amazon Product Advertising API” which now requires an HMAC signature, and the pyAWS library doesn’t sign requests and is no longer maintained. Here is some Python code to create a signed request:

    # pyAWS no longer works with the AWS signed request requirement
    # Sign an AWS REST request using the method described here:
    # http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?RequestAuthenticationArticle.html
    import base64
    import hashlib
    import hmac
    import time
    import urllib

    def getSignedUrl(accessKey, secretKey, params):
        #Step 0: add accessKey, Service, Timestamp, and Version to params
        params['AWSAccessKeyId'] = accessKey
        params['Service']        = 'AWSECommerceService'
        #Amazon adds hundredths of a second to the timestamp (always .000), so we do too.
        #(see http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html)
        params['Timestamp']      = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime())
        params['Version']        = '2009-03-31'
        #Step 1a: sort params
        paramsList = params.items()
        paramsList.sort()
        #Step 1b-d: create canonicalizedQueryString
        # This code comes from http://blog.umlungu.co.uk/blog/2009/jul/12/pyaws-adding-request-authentication/
        # and the resulting discussion
        canonicalizedQueryString = '&'.join(['%s=%s' % (k,urllib.quote(str(v))) for (k,v) in paramsList if v])
        #Step 2: create string to sign
        host          = 'ecs.amazonaws.com'
        requestUri    = '/onca/xml'
        stringToSign  = 'GET\n'
        stringToSign += host +'\n'
        stringToSign += requestUri+'\n'
        stringToSign += canonicalizedQueryString.encode('utf-8')
        #Step 3: create HMAC
        digest = hmac.new(secretKey, stringToSign, hashlib.sha256).digest()
        #Step 4: base64 the hmac
        sig = base64.b64encode(digest)
        #Step 5: append signature to query
        url  = 'http://' + host + requestUri + '?'
        url += canonicalizedQueryString + "&Signature=" + urllib.quote(sig)
        return url
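    The five signing steps can be exercised without hitting Amazon at all. Here is a minimal Python 3 sketch (urllib.parse.quote stands in for Python 2’s urllib.quote, and the key and parameters below are dummies):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign(secretKey, host, requestUri, params):
    # Step 1: sort params and build the canonicalized query string
    query = '&'.join('%s=%s' % (k, quote(str(v)))
                     for k, v in sorted(params.items()) if v)
    # Step 2: the string to sign is verb, host, path, and query, newline-separated
    stringToSign = 'GET\n%s\n%s\n%s' % (host, requestUri, query)
    # Steps 3-4: SHA-256 HMAC of the string, base64-encoded
    digest = hmac.new(secretKey.encode(), stringToSign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

sig = sign('dummy-secret', 'ecs.amazonaws.com', '/onca/xml',
           {'Operation': 'ItemLookup', 'ItemId': '0679722769'})
```

    Note that the sort must happen before the join; an unsorted query string produces a signature Amazon will reject.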
  • rajbot 6:43 pm on January 20, 2010 Permalink | Log in to leave a Comment
    Tags: code code

    Fast manipulation of tar files using 7zip 

    tar always reads every byte in an archive (it never calls seek()), so extracting a single file from a large archive is very slow.

    One solution is to use 7z instead, which is found in the p7zip-full Debian package. For some operations, 7z is three orders of magnitude faster than tar. Here are some timings that illustrate how much faster 7z is:

    #4.5GB archive listing using tar
    time tar tvf fifteenthcensus00reel2149_jp2.tar 
    (output suppressed, timing stable whether file cache is warmed or not)
    real	2m1.876s
    user	0m0.444s
    sys	0m7.740s
    #4.5GB archive listing using 7z with cold file cache
    time 7z l fifteenthcensus00reel2149_jp2.tar 
    (output suppressed)
    real	0m10.419s
    user	0m0.080s
    sys	0m0.124s
    #4.5GB archive listing using 7z with hot file cache
    time 7z l fifteenthcensus00reel2149_jp2.tar 
    (output suppressed)
    real	0m0.145s
    user	0m0.052s
    sys	0m0.040s
    #extraction of last file in a 4.5GB archive using tar
    time tar xvf fifteenthcensus00reel2149_jp2.tar fifteenthcensus00reel2149_jp2/fifteenthcensus00reel2149_0185.jp2
    real	2m3.545s
    user	0m0.436s
    sys	0m7.824s
    #extraction of last file in a 4.5GB archive using 7z and a hot file cache
    time 7z e fifteenthcensus00reel2149_jp2.tar fifteenthcensus00reel2149_jp2/fifteenthcensus00reel2149_0185.jp2
    real	0m0.104s
    user	0m0.036s
    sys	0m0.036s
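    Working the hot-cache speedups out from the “real” times above bears out the orders-of-magnitude claim:

```python
# wall-clock ("real") times from the transcripts above, in seconds
tar_list,    sz_list    = 2 * 60 + 1.876, 0.145
tar_extract, sz_extract = 2 * 60 + 3.545, 0.104

print(round(tar_list / sz_list))        # 841
print(round(tar_extract / sz_extract))  # 1188
```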
    • shag 1:51 pm on February 8, 2010 Permalink | Log in to Reply

      maybe you can file a bug with john gilmore the next time you see him :-)

  • peliom 12:25 pm on November 10, 2009 Permalink | Log in to leave a Comment
    Tags: code code

    Zaggle Gets You There with Personalized Event Newsletter 

    Over at the Zaggle Togetherness Blog we have a very important announcement :-)

    Our new personalized email newsletter aggregates all the events your friends are attending into one simple email digest. You can set the email frequency anywhere from once per day to once per week. Here is a short example of what the email might look like; most emails will have more events than this.

    Example of Personalized Zaggle Events Newsletter

    If you think Zaggle is useful, please help spread the word by posting this link on Facebook, Twitter or anywhere else with lots of friends: http://bit.ly/zeemail

  • rajbot 12:59 pm on July 29, 2009 Permalink | Log in to leave a Comment
    Tags: code code

    How To Pretty-Print a Python ElementTree Structure 

    ElementTree doesn’t support pretty-printing XML. lxml does, but isn’t installed on our system. minidom‘s toprettyxml() is seriously fucked up. What to do? Turned out PyXML was installed, so I took some advice from here and came up with this function, which takes an ET node and returns a pretty-printed string:

    import xml.etree.ElementTree as ET
    from xml.dom.ext.reader import Sax2
    from xml.dom.ext import PrettyPrint
    from StringIO import StringIO
    def prettyPrintET(etNode):
        reader = Sax2.Reader()
        docNode = reader.fromString(ET.tostring(etNode))
        tmpStream = StringIO()
        PrettyPrint(docNode, stream=tmpStream)
        return tmpStream.getvalue()
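    PyXML is long dead; for the record, on Python 3.9 and newer the standard library can do this on its own with ElementTree’s indent() (a different approach than the one above, noted here as an aside):

```python
import xml.etree.ElementTree as ET

def prettyPrintET(etNode, indent='  '):
    # ET.indent (Python 3.9+) rewrites the tree's whitespace in place
    ET.indent(etNode, space=indent)
    return ET.tostring(etNode, encoding='unicode')

print(prettyPrintET(ET.fromstring('<doc><item><sub/></item></doc>')))
```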
  • may 10:04 am on July 15, 2009 Permalink | Log in to leave a Comment
    Tags: binary, code code, einstein, spreadshirt



    I made myself a tshirt a couple weeks ago on Spreadshirt. They’re awesome! A lot better than any other print-to-order service I’ve tried like Cafe Press and Zazzle. The shirt is based off of this one that I saw a friend wearing. I didn’t like what it said though so mine says something different :-) You can’t see the whole phrase in the photo above but you can see it below (in a different font).


    • rajbot 10:40 am on July 15, 2009 Permalink | Log in to Reply

      These are great! Do they come in other colors? :)

    • may 11:30 am on July 15, 2009 Permalink | Log in to Reply

      Oh yes! I’ve put it up over here if anyone wants one of their own :-) http://filterfine.spreadshirt.com/ Once you select a shirt there are a couple color options – I think about 5 or 6 colors for men’s shirts. There are also other kinds of shirts that the pattern can be printed on http://www.spreadshirt.com/us/US/Create-t-shirt/Create-your-own-59/

    • rajbot 11:29 am on July 27, 2009 Permalink | Log in to Reply

      With Mike’s help, I made my first silkscreen this weekend! I used the TR header image, and it almost worked! The first three letters were overexposed, but the ‘robot’ part looked great! Will try again soon.

      Silkscreening in the age of spreadshirt and cafe press is *ridiculous*, but that’s probably why I want to learn how to do it..

    • may 12:03 pm on July 27, 2009 Permalink | Log in to Reply

      oooh I want to see!!! a friend of mine had a silkscreening birthday party a couple years ago because he had a thermofax machine and we all made tshirts. it was fun!

    • Q 11:38 am on July 31, 2009 Permalink | Log in to Reply

      raj you gotta show us the shirt, and I’d like to put in an order for one! I did a little silkscreening when I lived in Santa Barbara. My first design, by the way, was a Billy Idol album cover halftone. It took a few tries to figure out the exposure, but I finally got it. I made one for my nephew (he was about 7 at the time) and Billy Idol autographed it at the concert in St. Louis.

      I recently discovered these handy-looking things. Can anyone comment? It looks like they’re taking the goopy and drying parts out of the process and taking you straight to the exposure stage. I like that.

    • missy 7:54 pm on August 22, 2009 Permalink | Log in to Reply

      Raj, film in the age of digital is totally ridiculous but it’s fun, which is why I’m addicted. I totally understand.

      Oh, and I love the shirt!

  • peliom 4:07 am on July 8, 2009 Permalink | Log in to leave a Comment
    Tags: code code

    Zaggle: A Social Calendar For You and Your Friends 


    Well, here it is, the first rough cut of a Facebook Connect website that gathers all your Facebook events into one handy place. Check it out! And let us know what you think…
    Zaggle Preview Screenshot

  • tracey pooh 11:36 pm on May 24, 2009 Permalink | Log in to leave a Comment
    Tags: code code, word

    fotobook widget and plugin for wordpress 

    Make your facebook photos magically appear on your wordpress site

    OK, I’m certainly not all that and a bag of chips, but with a 565-mile bike trip to LA set to roll out in one week (over 7 days — I’m not *that* crazy 8-), I wanted to see if there was a nice/cute way to make me:

    • take pictures with my ifone
    • use the facebook app (to either take a picture or select one from ifone)
    • upload to facebook
    • have it automatically show up on my site

    the first 3 are easy-peasy.  you throw money at the problem and get the ifone with its legendarily crappy cell service and the free facebook app.

    for the last point, i searched around wordpress plugins and found “fotobook” plugin (which also comes with two default disabled widgets).

    it’s a liddle scary — you login to facebook while it’s “bugging” you, but once you’ve done that (and given it your car keys) it figures out all your photo albums.  you can pick which albums you want to show/hide.  you can login/auth more accounts/people too (but so far I only love myself).  to make my left-hand column “NEW TRACEY PIX” on

    my wordpress site

    show up, I enabled the “photos widget”, set it to “most recent”, and tweaked the underlying PHP code to make a simple link to my facebook “mobile uploads” public page.

    I think there’s an “update all albums” button somewhere, but there’s also a simple instruction to “cron a get of url [X]” every X period of time, so I did that.  Next, I tested it and “voila!” 5 minutes later, and then a few hours later when Hunter tagged me in some pictures, the left-hand widget updated as if by magic.

    I guess this sort of thing excites me.  May be old hat.  I know raj got tikirobot auto-posting the twitter feeds so maybe he’s figured something like this out already too.

    neat, no?

  • rajbot 10:29 pm on April 12, 2009 Permalink | Log in to leave a Comment
    Tags: code code

    TikiTV In Action 

    Here are some old pics of Sam and Peliom vj-ing with the open source TikiTV software at the Timothy Leary Archives event at 111 Minna. Way fun!





  • rajbot 9:23 pm on March 28, 2009 Permalink | Log in to leave a Comment
    Tags: code code, lighttpd, upstart

    How to run lighttpd under upstart 

    Upstart is Ubuntu’s init.d replacement. It greatly simplifies writing init.d scripts and has a great respawn feature similar to daemontools’ supervise or monit. And it comes with Ubuntu by default.

    For some reason, almost no one uses upstart. Even Ubuntu’s services use traditional /etc/init.d scripts instead of upstart scripts. I think this might be due to upstart’s non-existent documentation. There is no man page for upstart, and multiple people I know who have read the online docs somehow missed the three important commands that control upstart jobs: /sbin/start, /sbin/stop, and /sbin/status!

    Here is how it works: put an upstart script in /etc/event.d. Let’s call it /etc/event.d/foo. This script is now immediately available under upstart. Just type sudo start foo. That’s it.

    I converted Ubuntu’s /etc/init.d/lighttpd script to a much shorter upstart script. The big advantage of this is upstart will restart lighttpd if it dies for some reason. This is what the upstart script looks like:

    #this is an upstart script that  starts lighttpd
    start on runlevel 2
    start on runlevel 3
    start on runlevel 4
    start on runlevel 5
    stop on runlevel 0
    stop on runlevel 1
    stop on runlevel 6
    exec sudo -u www-data lighttpd -D -f /etc/lighttpd/lighttpd-infobase.conf

    That’s it! Save this script as /etc/event.d/OL-lighttpd, and then type sudo start OL-lighttpd. You can kill off the lighttpd process and it will get restarted.

    If you want to configure your lighttpd to write out a pid file, you can use pre-start and post-stop scripts to prepare and clean up the pid file:

    #this is an upstart script that  starts lighttpd
    start on runlevel 2
    start on runlevel 3
    start on runlevel 4
    start on runlevel 5
    stop on runlevel 0
    stop on runlevel 1
    stop on runlevel 6
    pre-start script
        #make sure there is a place to write the pid file (optional):
        mkdir -p /var/run/lighttpd > /dev/null 2> /dev/null
        chown www-data:www-data /var/run/lighttpd
        chmod 0750 /var/run/lighttpd
    end script
    exec sudo -u www-data lighttpd -D -f /etc/lighttpd/lighttpd-infobase.conf
    post-stop script
        #remove pid file (optional)
        #add server.pid-file = "/var/run/lighttpd/lighttpd.pid" to lighttpd.conf file to have it generate the pid file
        rm -f /var/run/lighttpd/lighttpd.pid
    end script

    If you want to stop lighttpd, just type sudo stop OL-lighttpd. You can also type sudo initctl list for a list of all jobs under upstart.

  • shag 11:34 pm on March 1, 2009 Permalink | Log in to leave a Comment
    Tags: code code

    Fixing MTRRs on Linux 

    Some x86 computers have a buggy BIOS that can cause poor Linux graphics performance.  The problem is caused by the BIOS’s boot-time configuration of the Memory Type Range Registers (MTRR).  On modern systems, the graphics aperture should be configured as write-combining memory.  But buggy BIOSes will configure it as plain “uncacheable” memory.  The performance cost of the bug can be large: as an example, a ThinkPad T61 with Intel X3100 GMA and T9300 CPU initially rendered ~120 fps in glxgears; this increased to ~580 fps with a fixed configuration.

    You can determine whether your system has the bug by examining your X server log (likely in /var/log/Xorg.0.log) and looking for a line similar to:

    (WW) intel(0): Failed to set up write-combining range (0xe0000000,0x10000000)

    The Fix

    The good news is that the MTRRs can be fixed from the Linux command line via ‘cat’ and ‘echo’ and the /proc/mtrr file.  The bad news is that figuring out the right configuration is difficult and often involves some trial and error.  The process often involves converting cached memory to uncached memory, so it is best done in single-user mode, without X running.  The machine should be doing nothing else; otherwise the system is likely to slow to the point of unusability.

    Try the Easier Fix First

    Someone may have already fixed the problem for your machine.  If they’ve posted it to the Internet, try their script first.  One sample is below, along with a few links for other machines.  These may not work, since BIOS MTRR configurations can vary depending on the amount of memory in the system, devices present, and BIOS revision.

    Then Try Figuring It Out

    You will need Linux expertise, comfort with hexadecimal arithmetic, and a basic understanding of how computers map memory. General steps:

    1. Use the error message from the Xorg log file (probably /var/log/Xorg.0.log) to determine the start address and size of the graphics aperture.  Example:
      (WW) intel(0): Failed to set up write-combining range (0xe0000000,0x10000000)
    2. Reorganize the MTRR settings (via /proc/mtrr) to mark the video chip’s memory as write-combining.  This generally involves:
      1. removing any MTRRs that overlap with the graphics region,
      2. adding a MTRR for the write-combining area, and
      3. breaking the removed MTRR into MTRRs that do not overlap the graphics aperture area.
    3. Create a script to replicate the MTRR settings (sample below) and run it during boot, e.g., from /etc/rc.local.
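    The splitting in step 2 is plain hex arithmetic. A Python sketch using the sample numbers from the T61 fix below (the variable names are mine):

```python
MB = 1 << 20
region_base,   region_size   = 0xc0000000, 0x40000000  # reg00: 1GB uncachable
aperture_base, aperture_size = 0xe0000000, 0x10000000  # from the Xorg warning

# the piece of the old region below the aperture stays uncachable:
below_size = aperture_base - region_base
print(hex(region_base),   below_size // MB)     # 0xc0000000 512
print(hex(aperture_base), aperture_size // MB)  # 0xe0000000 256
```

    Those two pieces are exactly the 512MB uncachable region and the 256MB write-combining region that appear in the fixed configuration below.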

    The details are beyond the scope of this post, but here are some resources that may help.

    MTRR Fix For A Sample Configuration

    Here are the details of an MTRR fix for a Lenovo ThinkPad T61 (8897CTO) laptop with 4GB RAM and a 64-bit kernel.  Here’s the default BIOS MTRR configuration (cat /proc/mtrr):

    reg00: base=0xc0000000 (3072MB), size=1024MB: uncachable, count=1
    reg01: base=0x13c000000 (5056MB), size=  64MB: uncachable, count=1
    reg02: base=0x00000000 (   0MB), size=4096MB: write-back, count=1
    reg03: base=0x100000000 (4096MB), size=1024MB: write-back, count=1
    reg04: base=0xbf700000 (3063MB), size=   1MB: uncachable, count=1
    reg05: base=0xbf800000 (3064MB), size=   8MB: uncachable, count=1

    The problem is that the graphics card memory (which according to the Xorg log is at 0xe0000000) lies within the reg00 uncacheable region, but should be marked as “write-combining.”  After running this script:

    echo "disable=5" > /proc/mtrr
    echo "disable=4" > /proc/mtrr
    echo "disable=3" > /proc/mtrr
    echo "disable=2" > /proc/mtrr
    echo "base=0x0 size=0x80000000 type=write-back" > /proc/mtrr
    echo "base=0x80000000 size=0x40000000 type=write-back" > /proc/mtrr
    echo "base=0xe0000000 size=0x10000000 type=write-combining" > /proc/mtrr
    echo "base=0x100000000 size=0x40000000 type=write-back" > /proc/mtrr
    echo "base=0xbf700000 size=0x100000 type=uncachable" > /proc/mtrr
    echo "base=0xbf800000 size=0x800000 type=uncachable" > /proc/mtrr
    echo "disable=0" > /proc/mtrr
    echo "base=0xc0000000 size=0x20000000 type=uncachable" > /proc/mtrr

    … the MTRR configuration changes to:

    reg00: base=0xc0000000 (3072MB), size= 512MB: uncachable, count=1
    reg01: base=0x13c000000 (5056MB), size=  64MB: uncachable, count=1
    reg02: base=0x00000000 (   0MB), size=2048MB: write-back, count=1
    reg03: base=0x80000000 (2048MB), size=1024MB: write-back, count=1
    reg04: base=0xe0000000 (3584MB), size= 256MB: write-combining, count=2
    reg05: base=0x100000000 (4096MB), size=1024MB: write-back, count=1
    reg06: base=0xbf700000 (3063MB), size=   1MB: uncachable, count=1
    reg07: base=0xbf800000 (3064MB), size=   8MB: uncachable, count=1

    and glxgears frame rates are much higher.

    More Information

    Here are some samples that others have provided for other configurations:

    A technical discussion of write-combining: http://download.intel.com/design/PentiumII/applnots/24442201.pdf

    Linux /proc/mtrr documentation: http://www.mjmwired.net/kernel/Documentation/mtrr.txt

    • Hugh 1:28 pm on April 12, 2009 Permalink | Log in to Reply

      Recent kernels have an option enable_mtrr_cleanup that fixes this problem on many systems. Unfortunately, it doesn’t work for a friend’s Thinkpad x61t.

      I wrote a (userland) program to fix this problem. It does work on his and my x61t tablets. You can get it from

      I update the program once in a while. Newer versions will have newer dates in the tgz file name.

  • peliom 11:54 am on February 9, 2009 Permalink | Log in to leave a Comment
    Tags: code code

    TikiTV Launch Party! 

    We had a fantastic live action demo of TikiTV last night at the reception party celebrating the launch of the Timothy Leary Video Collection. A lot of great people were asking about TikiTV, how it works, how to use it, how to get involved. This is the start of a good thing.

    Add your TIKITV comments/questions below and we will hook you up!

    • Larry Maloney 1:26 am on September 20, 2009 Permalink | Log in to Reply

      I saw TikiTV tonight at the HackerDojo grand opening, and my, I was stunned. This thing is super cool!

      Mixing multiple video streams! It’s a MUST have for your next party. Super fun, and never the same. I just downloaded TikiTV and I can’t wait to mix in some of my family videos with emotional backgrounds.

    • jeicrash 2:20 pm on March 28, 2010 Permalink | Log in to Reply

      Any plans for windows or linux version? Also midi input support would be nice, like to use my korg nanokontrol to fade layers.

  • rajbot 1:38 pm on January 27, 2009 Permalink | Log in to leave a Comment
    Tags: code code, opengl

    Announcing TikiTV, the Best Open Source VJ Software Ever! 

    TikiTV is an awesome open-source video mixing application for Mac OS X, developed by peliom and VJ Science. If you are a video nerd, you should check this out:

    • decodes 6 full-quality 720×480 MPEG-2 streams at 60fps
    • on screen preview of both 3-channel decks
    • fullscreen output to second display (vga projector)
    • rock solid 60fps output, no dropping frames
    • requires MacBook Pro 2GHz or higher

    You can download TikiTV here. For the video hax0rz out there, you can clone the github repo.


  • tracey pooh 12:44 pm on November 16, 2008 Permalink | Log in to leave a Comment
    Tags: code code, rectangular pixels, widescreen

    ffmpeg hook to aid with “rectangular pixels” 

    Tired of screen-scraping the output of ffmpeg and/or mplayer to get the parameters / clip info for a media file?

    This hook attempts to remedy that by printing simple information about the passed-in video from the cmd line.

    It will also print out whether or not the clip is using “rectangular pixels”.

    WTF is a rectangular pixel?


    Well, the easiest examples are DVDs.  You only want to buy a DVD if it says one of two standard phrases on it — “Enhanced for 16:9 TVs” or “Anamorphic widescreen”.  They both mean the same thing — namely that the video on the DVD disk is wider than 4:3 aspect (pretty much all films are ratio 16:9 or even wider, like ratio 2.35:1) *and* that it didn’t *waste* any DVD bytes by encoding “top/bottom black bars”.  (If it doesn’t say those code phrases, it’s an older, crappy/low-budget production, or worse a “pan-n-scan” chopped film (where they lop off the left and right sides of each frame to fit into a 4:3 TV!)  Worse yet, if the DVD only says something like “widescreen version”, though it sounds good, it means that while they didn’t cut off the picture, they wasted 25% (or more!) of the pixels encoding “black bars” on the top and bottom.  So you have fewer pixels in the DVD encoding the picture compared to an “anamorphic” version of the same thing.  Hello crappy quality!)



    Anamorphic DVDs are encoded internally at 720×480 pixels per image.

    Now look at this:

    4:3 video == 1.33 ratio == 640×480 pixel image

    16:9 video == 1.78 ratio == 854×480 pixel image

    so what is 720×480?  it’s almost perfectly in the middle of those two — math: 720/480 = 1.5

    So the “encoded transport image size” is neither 16:9 nor 4:3 — it’s right in the middle, capable of encoding either a 4:3 video (like non-high-def TV, many computer screens) or a 16:9 video (high-def TV, some digital video).  I like to think of it as “how the formats that came out right before HDTV took off compromised to hedge their bets between 4:3 and 16:9”.

    The final critical bit of information about a DVD is the “aspect ratio” (not of the overall image, confusingly, but of each encoded *pixel*!)  This says “wait, this pixel isn’t square, literally, like you’d think if you just read the image — it’s actually supposed to be stretched to make the *overall image* either 640 pixels wide (squoosh down from 720) or 854 pixels wide (stretch out from 720)”.  So the video track is “flagged” with a “pixel aspect” (often referred to as PAR (Pixel Aspect Ratio), SAR (Sample Aspect Ratio), or DAR (Display Aspect Ratio) — some of those have some slight nuances/differences, but that’s digging too deep).  Anyway, neat, huh?  Your anamorphic DVD is a changeling!  (maybe “anamorphic” makes more sense mnemonically now 8-)  (PS: “DV” video — the most common format that digital camcorders that write to tape use — is similar: 720×480 pixels/image plus a “flag” for what each pixel “shape” is.)
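    That stretch is a one-line computation; a quick Python sketch (32/27 is the PAR from the example output below; 8/9 is the standard 4:3 NTSC pixel aspect, added for comparison):

```python
def display_width(encoded_width, par_num, par_den):
    # stretch (or squoosh) the encoded width by the pixel aspect ratio
    return round(encoded_width * par_num / par_den)

display_width(720, 32, 27)  # anamorphic 16:9 -> 853
display_width(720, 8, 9)    # plain 4:3 -> 640
```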


    This hook I wrote will output information about the clip (like “ffmpeg -i” will do, but in a format easier to parse) as well as information about the pixels (unlike ffmpeg).  So you can then know more about a clip if you are going to do things like pull single frames/thumbnails from it or convert it to another format.  We have been using this at Internet Archive for our movies for over a year and a half now and it works great!


    C code is here.

    There are some instructions for compiling with an ffmpeg source tree at the top of the C code.  (There is also an ubuntu-on-AMD compiled “.so” at that link, reachable by changing the suffix from “.c” to “.so”, FWIW)


    Example invocation and output:

    ffmpeg -vhook "/petabox/deriver/identify.so oldpresidio.mpeg"
    FFmpeg version SVN-rUNKNOWN, Copyright (c) 2000-2007 Fabrice Bellard, et al.
      configuration: --enable-gpl --enable-pp --enable-libvorbis --enable-libogg --enable-liba52 --enable-libdts --enable-dc1394 --enable-libgsm --disable-debug --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-x264 --prefix=/usr/
      libavutil version: 49.3.0
      libavcodec version: 51.38.0
      libavformat version: 51.10.0
      built on Nov 30 2007 19:09:20, gcc: 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)
    Video: mpeg2video, yuv420p, 720x480, q=2-31, 8000 kb/s
    width: 720
    height: 480
    aspect: 32/27
    fps: 29.97
    duration: 00:03:05.2
    audio: true
    Failed to Configure /petabox/deriver/identify.so
    Failed to add video hook function: /petabox/deriver/identify.so oldpresidio.mpeg
    The “failure” above at the end is deliberate/OK (it just makes sure ffmpeg stops and doesn’t try to transcode).

    So we see that this video clip is indeed widescreen by the “aspect” line above (indicating the pixels are rectangular, not square) with value 32/27.

    If we multiply the encoded image width of 720 by 32 and divide by 27, we get the magic/correct 853.33 (round up or down to the nearest pixel).

    We use this utility at Internet Archive to make user-friendly formats like “h.264 .mp4” videos and “Ogg Theora .ogv” videos that get converted to the proper square-pixel equivalent (and *not* mess up widescreen videos 8-)
    • Kai 6:45 am on April 3, 2009 Permalink | Log in to Reply

      Looks like this is what we have been looking for. I am not entirely sure though how to install it. Just downloaded the ffmpeg trunk (0.5) and it looks like there is no more vhook directory. Can you provide step by step instructions? MUCH appreciated, thanks.

    • tracey pooh 11:47 am on April 7, 2009 Permalink | Log in to Reply

      hi kai! have you been able to build it? if you are using a mac, here’s another post you may find interesting that shows how i built it on both a PPC (work) and intel (home) mac with OS-X…


      at any rate, the great thing about ffmpeg v0.5 is that, so far, in all the testing i have done, it seems to *finally* report the PAR (Pixel Aspect Ratio) for an item that has “rectangular pixels” right on the command line. so we at archive.org should soon no longer need to use this “vhook” and the C code that i wrote, yay!

      so, for example, here is the new output from v0.5 ffmpeg on the same video file above now:

      ffmpeg -i oldpresidio.mpeg
      Duration: 00:03:05.71, start: 0.290656, bitrate: 8070 kb/s
      Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 8000 kb/s, 29.97 tbr, 90k tbn, 59.94 tbc
      Stream #0.1[0x1c0]: Audio: mp2, 48000 Hz, stereo, s16, 384 kb/s

      you can parse the PAR output from the output, when it exists, to know that the pixels in the video are rectangular.

      so, from some PHP code that we use at archive.org:
      $filmWidth = round($filmWidth * $parW / $parH);
      we would plug in our numbers from the ffmpeg info/output to be
      $filmWidth = round(720 * 32 / 27)
      which comes out to round(853.33333…)
      or 852 (typically want to round to an even number)
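      The parse-and-scale step can be sketched in Python (a rough illustration only, not archive.org's actual code; the regex and function name are mine):

```python
import re

def display_width(ffmpeg_line):
    """Pull the PAR out of an 'ffmpeg -i' stream line and return the
    square-pixel display width, or None if no PAR is reported."""
    m = re.search(r'(\d+)x(\d+) \[PAR (\d+):(\d+)', ffmpeg_line)
    if m is None:
        return None  # no PAR reported: assume the pixels are already square
    w, h, par_w, par_h = map(int, m.groups())
    return int(round(w * par_w / float(par_h)))

line = 'Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 8000 kb/s'
print(display_width(line))  # -> 853
```

      Plugging in the numbers from the output above gives round(720 * 32 / 27) = 853, matching the hand calculation.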

  • rajbot 5:13 pm on November 8, 2008 Permalink | Log in to leave a Comment
    Tags: code code, ,   

    Showing Twitter Status On Your Group Blog 

    First a bit of meta discussion: I changed the left sidebar of TikiRobot to show the five most recent status updates from the fellow robots! It was kind of silly to display tweets from people who haven’t updated since last year. If you want your status to show up in the sidebar, and it isn’t there for some reason, lemme know!

    Twitter has an API for retrieving the most recent status updates from your friends, so I created a new twitter user to follow the tikirobot posters. Then, I used this python script to fetch the friends timeline and spit out html for the blog. I wanted to do this all in javascript, but the friends timeline api needs authentication, and I didn’t want to publicize the twitter account password. Plus, this way I can cache the data on our server, so it doesn’t hit twitter so often.

    #Copyright(c)2008 TikiRobot.net - Software license GPL version 3.
    import time
    import datetime
    import os
    import fcntl
    import sys
    import codecs
    import re

    # CalculateRelativeTime()
    def CalculateRelativeTime(str):
        dt = datetime.datetime.strptime(str, '%a %b %d %H:%M:%S +0000 %Y')
        now = datetime.datetime.utcnow()
        delta = now - dt
        secs  = delta.seconds
        if (delta.days > 0):
            return "%d days ago"%(delta.days)
        elif (secs < 60):
            return "%s seconds ago"%(int(secs))
        elif (secs < 3600):
            return "%s minutes ago"%(int(secs/60))
        else:
            return "%s hours ago"%(int(secs/3600))

    # main()
    print """Content-type: text/javascript; charset=UTF-8\n"""
    useCachedRss = True
    cacheFile    = 'cache/twitter.json'
    sys.stdout = codecs.getwriter('utf8')(sys.stdout)

    try:
        stats = os.stat(cacheFile)
        if (time.time() - stats.st_mtime) > 300:
            #cache expired
            useCachedRss = False
    except OSError:
        #cache file not found
        useCachedRss = False

    if useCachedRss:
        try:
            #print "reading cached json"
            fh = open(cacheFile, 'r')
            fcntl.lockf(fh, fcntl.LOCK_SH)
            contents = fh.read()
            fcntl.lockf(fh, fcntl.LOCK_UN)
            fh.close()
        except IOError:
            print "got some kind of error when reading cached json. quitting"
            sys.exit(-1)
    else:
        try:
            #print "fetching new json"
            import urllib2
            apiurl = 'http://twitter.com/statuses/friends_timeline.json?count=5'
            auth_handler = urllib2.HTTPBasicAuthHandler()
            auth_handler.add_password(
                realm='Twitter API',
                uri='http://twitter.com/',
                user='USERNAME',    #placeholder: your twitter account name
                passwd='PASSWORD')  #placeholder: your twitter password
            opener = urllib2.build_opener(auth_handler)
            urllib2.install_opener(opener)
            urlfh = urllib2.urlopen(apiurl)
            contents = urlfh.read()
            fh = open(cacheFile, 'w')
            fcntl.lockf(fh, fcntl.LOCK_EX)
            fh.write(contents)
            fcntl.lockf(fh, fcntl.LOCK_UN)
            fh.close()
        except Exception:
            print "got some kind of error when fetching twitter timeline. quitting"
            sys.exit(-1)

    import json
    timeline = json.read(contents)  #python-json module; with the stdlib json module, use json.loads()
    htmlstr = ""
    for i in range(5):
        name = timeline[i]['user']['screen_name']
        imgurl = re.sub(r'_normal.(jpg|png)$', r'_mini.\1', timeline[i]['user']['profile_image_url'])
        twit   = re.sub(r'\'', '&#39;', timeline[i]['text'])
        twit   = re.sub(r'\n', ' ', twit)
        twit   = re.sub(r'http://tinyurl.com/(\S+)', r'<a href="http://tinyurl.com/\1">tinyurl.com/\1</a>', twit)
        when   = CalculateRelativeTime(timeline[i]['created_at'])
        htmlstr += """<div class="twitter">"""
        htmlstr += """<img src="%s"/>"""%(imgurl)
        htmlstr += """<div class="twitterRight">"""
        htmlstr += """<a href="http://twitter.com/%s"><strong>%s</strong></a>: """%(name, name)
        htmlstr += twit
        htmlstr += """<span class="twitterMeta">- """ + when + "</span>"
        htmlstr += """</div>"""
        htmlstr += """<div class="clear"></div>"""
        htmlstr += """</div>"""
    print """var tikiTwits = '"""+htmlstr+"';"
    print """document.write(tikiTwits);"""

    It’s fun to see that Twitter still uses the Javascript API that I proposed to them over the phone a couple years ago.

    Also, dear lazyweb: please send me a python function that converts a UTC date string to PST. Thanks!
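    For what it's worth, here is one such function (a sketch that applies a fixed UTC-8 offset; it ignores daylight saving time, so a real answer probably wants pytz):

```python
from datetime import datetime, timedelta

def utc_to_pst(utc_str, fmt='%a %b %d %H:%M:%S +0000 %Y'):
    """Convert a UTC date string (Twitter's created_at format by default)
    to PST by subtracting a fixed 8 hours. No DST handling."""
    return datetime.strptime(utc_str, fmt) - timedelta(hours=8)

print(utc_to_pst('Sat Nov 08 01:30:00 +0000 2008'))  # -> 2008-11-07 17:30:00
```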

    • may 10:55 am on November 10, 2008 Permalink | Log in to Reply

      yay! maybe this will get me to twitter more than once every couple months :-)

  • tracey pooh 6:38 pm on November 6, 2008 Permalink | Log in to leave a Comment
    Tags: archive.org, code code, ogg, theora,   

    geek stuff from archive.org — making Ogg Theora videos 

    Fast, reliable way to encode Theora Ogg videos using ffmpeg, libtheora, and liboggz

    archive.org has started to make theora derivatives for movie files, where we create an Ogg Theora video output for each one. after trying a bunch of tools over a good corpus of wide-ranging videos, i found a neat way to make the Archive derivatives.

     High Level:

    • use ffmpeg to turn any video to “rawvideo”.
    • pipe its output to *another* ffmpeg to turn the video to “yuv4mpegpipe”.
    • pipe its output to the libtheora tool.
    • for videos with audio, have ffmpeg create a vorbis audio .ogg file.
    • add tasty metadata (with liboggz utils).
    • combine the video and audio ogg files to an .ogv output!

    Detailed example:  

    /usr/bin/ffmpeg -v 0 -an -deinterlace  -s 400x300 -r 20.00 -i CapeCodMarsh.avi -vcodec rawvideo -pix_fmt yuv420p -f rawvideo -  | \
    /usr/bin/ffmpeg -v 0 -an -f rawvideo   -s 400x300 -r 20.00 -i - -f yuv4mpegpipe -  | \
    /petabox/deriver/libtheora-1.0/lt-encoder_example --video-rate-target 512k - -o tmp.ogv;
    /usr/bin/ffmpeg -y -i CapeCodMarsh.avi -vn -acodec vorbis -ac 2 -ab 128k -ar 44100 audio.ogg;
    /petabox/sw/bin/oggz-comment audio.ogg -o audio2.ogg TITLE="Cape Cod Marsh" ARTIST="Tracey Jaquith" LICENSE="http://creativecommons.org/licenses/publicdomain/" DATE="2004" ORGANIZATION="Dumb Bunny Productions"  LOCATION=http://www.archive.org/details/CapeCodMarsh;
    /petabox/sw/bin/oggzmerge tmp.ogv audio2.ogg -o CapeCodMarsh.ogv;


    • Why the double pipe above? Some videos could not go directly to yuv4mpegpipe format such that libtheora (or ffmpeg2theora) would work all the time.
    • We do the vorbis audio outside of libtheora (or ffmpeg2theora) to avoid any issues with Audio/Video sync.
    • We convert to yuv420p in the rawvideo step because ffmpeg2theora has (i think) some known issues of not handling all yuv422 video inputs (i found at least a few videos that did this).
    • We add the metadata to the audio vorbis ogg because adding it to the video ogv file wound up making the first video frame not a keyframe (!)

    So this will end up working in Firefox 3.1 and greater — the new HTML “video” tag:

    <video controls="true" autoplay="true" src="http://www.archive.org/download/commute/commute.ogv"> for firefox betans </video>

    This technique above worked nicely across a wide range of 46 source and “trashy” videos that I use for QA before making live a new way to derive our videos at archive.org ( http://www.archive.org/~MY-FIRST-NAME/_/stream.php  [sorry, don't necessarily want all that crawled by non-rajbot robots] )

    -tracey jaquith   “don’t make me 3:2 pulldown you”

    • Adam Rosi-Kessel 5:44 am on November 7, 2008 Permalink | Log in to Reply

      I’ve always used ffmpeg2theora without any A/V sync problems — that would seem to be the much simpler option where it works. Are there certain conditions you’ve found where ffmpeg2theora fails?

    • tracey jaquith 11:43 am on November 7, 2008 Permalink | Log in to Reply

      absolutely i can find A/V sync issues quite easily, unfortunately.

      now, granted, there are likely some issues with the encoding of these videos as inputs to begin with — but these aren’t uncommon w/ the stuff that gets uploaded to archive.org.


      both of these fail to sync A/V:
      ffmpeg2theora amoaLauraMTV.wmv -sync -o out.ogv
      ffmpeg2theora amoaLauraMTV.wmv -o out.ogv

      using my technique above, we sync properly.
      it could be just that ffmpeg is more forgiving when dealing with “trashier” encodings…

    • shag 1:47 pm on December 18, 2008 Permalink | Log in to Reply

      yow, 3:2 pulldown, sounds humiliating ;-)

    • Gregory Maxwell 2:15 pm on February 7, 2009 Permalink | Log in to Reply

      Please do not use the above instructions unless you want to be accused of intentionally making Vorbis look bad. The FFMPEG internal Vorbis encoder is not something anyone should actually use. The sound quality is terrible.

      I suspect most people (myself included) were unaware of FFMPEG’s internal Vorbis encoder because just about everything else uses the (BSD licensed) Xiph.Org reference encoder.

      The above commands should be changed to use “-acodec libvorbis” rather than “-acodec vorbis”:

      ffmpeg -y -i CapeCodMarsh.avi -vn -acodec libvorbis -ac 2 -ab 128k -ar 44100 audio.ogg

      This is not audio-geek nitpicking: The above “128kbit” FFMPEG produced audio sounds worse than 32kbit/sec output produced from a reasonable encoder.

      In order to make this point more clearly I have posted a couple of 11 second examples. First listen to a 64kbit/sec example produced by Xiph.org libVorbis. Then listen to the “128kbit/sec” FFMPEG output (which is really about 64kbit/sec for this input). As you can hear, the FFMPEG output sounds very bad in both absolute and comparative terms. Even 32kbit/sec audio produced by a decent encoder sounds much better than the ffmpeg output.

    • tracey jaquith 4:43 pm on February 19, 2009 Permalink | Log in to Reply

      thanks for the info.

      couple quick things. the sound is not very good, agreed, but i would personally not say “terrible”.

      we are looking into altering our technique to use libvorbis, but i thought i should point out that not everyone is using the most recent version of linux as i suspect you may be? archive.org is still stuck on the “gutsy” version of ubuntu which is from oct 2007. so even with “--enable-libvorbis” compiled into gutsy-era ubuntu, there is no known codec alternative other than “-acodec vorbis” that can do vorbis.

      we are more likely to do an OS upgrade and try to update it at that point.

      so i don’t disagree with you, i just think the severity of the warning is a tad higher than need be. likely we’ll disagree about that but that’s ok.

      thx for the pointer!

  • rajbot 10:35 am on August 20, 2008 Permalink | Log in to leave a Comment
    Tags: code code, lolcode, parrot, ,   

    I can haz compiler?????? 

    Here is a lightning talk by Patrick Michaud at YAPC::Europe 2008 in Copenhagen, demonstrating the Parrot Compiler Toolkit. Live demo: a LOLCODE compiler. wut?

    It would be really nice if we can take all these languages, pass them through Parrot, and translate them to other languages. I decided that wasn’t good enough. Let’s just translate them to LOLCODE.


  • rajbot 4:33 pm on May 28, 2008 Permalink | Log in to leave a Comment
    Tags: code code, , flowplayer,   

    How To Compile FlowPlayer on Ubuntu 8.04 

    Here is how I compiled FlowPlayer on Hardy:

    sudo apt-get install ant
    sudo apt-get install java-gcj-compat-dev
    #without java-gcj-compat-dev, ant throws this error: Unable to locate tools.jar. Expected to find it in /usr/lib/jvm/java-1.5.0-gcj-4.2-
    sudo apt-get install mtasc
    #need old version of swfmill; version in repo is 0.2.12
    wget http://swfmill.org/releases/swfmill-0.2.11.tar.gz
    tar xvzf swfmill-0.2.11.tar.gz
    cd swfmill-0.2.11
    ./configure
    make
    #install in /usr/local/bin
    sudo make install
    cd ..
    wget http://internap.dl.sourceforge.net/sourceforge/flowplayer/flowplayer-2.2-src.zip
    unzip flowplayer-2.2-src.zip
    cd flowplayer-src
    emacs build.properties #edit DEPLOY_DIR
    ant #then build with ant (output goes to DEPLOY_DIR)

  • rajbot 12:58 am on March 5, 2008 Permalink | Log in to leave a Comment
    Tags: BeautifulSoup, code code, , , , , YouTubeFilter   

    Announcing YouTubeFilter 

    YouTubeFilter is a simple tool that scrapes the MetaFilter RSS feed and embeds the YouTube videos inline. I wrote it to make it easier to find cool videos to watch on my Wii.

    Unfortunately, the Wii runs out of memory when loading YouTubeFilter! And of course, Firefox bugs on the mac prevent some of the embedded videos from showing up unless you resize the window just right. Stupid firefox.

    The code is checked into SourceForge. I use Beautiful Soup for parsing the RSS. Someone please help me make it work on the wii!
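    The scraping idea is roughly this (a made-up sketch using the stdlib's ElementTree rather than Beautiful Soup; the function name and embed markup are mine, and the real code is in the SourceForge repo):

```python
import re
import xml.etree.ElementTree as ET

def youtube_embeds(rss_text):
    """Find YouTube links in RSS item descriptions and return embed tags."""
    embeds = []
    for item in ET.fromstring(rss_text).iter('item'):
        desc = item.findtext('description') or ''
        for vid in re.findall(r'youtube\.com/watch\?v=([\w-]+)', desc):
            embeds.append('<embed src="http://www.youtube.com/v/%s"></embed>' % vid)
    return embeds

rss = ('<rss><channel><item><title>neat video</title>'
       '<description>see http://www.youtube.com/watch?v=abc123 !</description>'
       '</item></channel></rss>')
print(youtube_embeds(rss))  # one embed tag for video abc123
```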

  • rajbot 11:10 pm on February 23, 2008 Permalink | Log in to leave a Comment
    Tags: code code, flashcards, , , TikiCards   

    Announcing TikiCards: Flashcards for the Web 

    sweet.pngI was inspired by peliom’s web-2.0 Japanese flashcards, so I made some Hindi flashcards this weekend. Or rather, I made an open-source framework for javascript-powered flashcards called TikiCards, and pre-populated it with vocabulary words from the awesome Door Into Hindi lessons that I’ve been working on. I’ll work on adding more words and more languages soon. The code is checked in here.

    Unfortunately, Firefox on the Mac doesn’t ship with Devanagari fonts, and it doesn’t use the OS X system font, so all the characters show up as question marks. And unlike peliom’s Japanese flashcards, which work great on the iPhone, the Devanagari characters show up as square boxes on the iPhone. So if you want to use these for Hindi, use Safari on a Mac or FF on unix.

    Anyway, check it out and let me know what you think.

    • rajbot 2:41 am on February 24, 2008 Permalink | Log in to Reply

      I added a multiple-choice mode to make it easier to study without having to type (iPhone mode).

      Also, this will help with the forward translation mode (english-to-hindi or english-to-japanese).

      BTW, does the iPhone have japanese text entry mode? If not, multiple-choice will help a bunch..

    • rajbot 2:42 am on February 24, 2008 Permalink | Log in to Reply

    • may 11:07 pm on February 25, 2008 Permalink | Log in to Reply

      neat! i don’t know if the iphone supports input for asian languages yet – i don’t think it does

    • mangtronix 8:22 pm on February 26, 2008 Permalink | Log in to Reply


  • rajbot 1:57 am on January 29, 2008 Permalink | Log in to leave a Comment
    Tags: , code code, , , ,   

    Announcing ChatBubble! 

    I’ve finally made it easy to post good-looking iChat transcripts to the blog! We use CSS to style DIVs to look like iChat speech balloons.

    Cool! Where I can get the CSS?

    All the code is checked into SourceForge. You can browse it here.

    But how does it work?

    A brief description is here. Scott Schiller came up with the Even More Rounded Corners technique that we use. There is a CSS file to include and a python script that turns transcripts into html that you can paste into a blog post. We need more documentation, CSS cleanup, cross-browser support, and more speech balloon colors, if you feel like contributing patches.

    Doesn’t WordPress completely bork the formatting in Safari by adding unmatched </p> tags?

    Yup! WordPress is crap! You can use the wp-unformatted plugin to disable autop() on posts that contain ChatBubbles.
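    The transcript-to-HTML step amounts to something like this (an illustrative sketch only; the real formatDivs.py markup and the "chatBubble" class name here are made up):

```python
def format_bubble(line):
    """Turn a 'name: message' transcript line into a speech-balloon DIV.
    The 'chatBubble' class name is invented for this sketch."""
    name, _, msg = line.partition(': ')
    for ch, ent in (('&', '&amp;'), ('<', '&lt;'), ('>', '&gt;')):
        msg = msg.replace(ch, ent)  # escape HTML in the message body
    return '<div class="chatBubble"><strong>%s</strong>: %s</div>' % (name, msg)

print(format_bubble('raj: hi guys'))
```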

    • Dave C. 1:26 pm on February 12, 2008 Permalink | Log in to Reply

      Thanks for putting this together. I’ve been looking for an easy way to display ichat logs inside of blog entries. I think this will do nicely if I can get it working.

      I’m having trouble understanding the example syntax — do you run it against a log file or against a bunch of message text entered as an arg?

    • rajbot 4:16 pm on February 12, 2008 Permalink | Log in to Reply

      Hi Dave,

      You can run the formatter tool by passing message text as CLI args:
      python formatDivs.py "raj: hi guys" "may: hi!" "peliom: hi!!"

      I’ll make a web front end soon, but first I have to make it work on IE7. Apparently the bubbles don’t work on IE, and I only have Mac and Linux machines right now, so I won’t be able to figure out what’s wrong until I borrow a windows machine.

    • Michael Sharman 11:24 pm on October 22, 2008 Permalink | Log in to Reply

      Hi guys,

      Great work, but a small piece of feedback: there are several problems with this in IE7, mainly around the blue speech bubble not aligning correctly, which throws out the bottom bg image.

      Is there updated code for this?


  • rajbot 12:34 am on December 21, 2007 Permalink | Log in to leave a Comment
    Tags: Arduino, avr, breadboard, code code, Diecimila, , , uDuino   

    uDuino: low-cost Arduino 

    Tymm has developed a low-cost, breadboard-based Arduino. His Diecimila-compatible design separates the programming adapter (which you only need one of) from the Arduino board to keep costs down.

    So after an initial investment of under $25, you can put together cores for breadboard-based Arduino prototypes for $8-10… the Diecimila auto-reset works… and you actually get 2 I/O pins out of the deal

    Cool project, Tymm!

  • rajbot 4:38 pm on December 20, 2007 Permalink | Log in to leave a Comment
    Tags: code code, , exiftool, , identify,   

    How To Quickly Find the Size of an Image 

    To find the size of an image, I usually use ImageMagick’s identify command. Unfortunately, identify is horribly slow, especially for JPEG 2000 images (thanks to a very slow libjasper).

    So instead of using identify:
    identify -format "%wx%h" image.jp2

    Use exiftool instead:
    exiftool -s -s -s -ImageSize image.jp2

    exiftool is 62.5 times faster (!!!) than identify for finding image size on my dual 2.0GHz Athlon.
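    As an aside, for some formats you can skip external tools entirely. For example, a PNG's dimensions sit at fixed offsets in its IHDR chunk, so a few lines of stdlib Python suffice (a sketch, not something the post itself uses):

```python
import struct

def png_size(data):
    """Return (width, height) from the first bytes of a PNG file.
    The IHDR chunk puts big-endian width and height at bytes 16-24."""
    assert data[:8] == b'\x89PNG\r\n\x1a\n', 'not a PNG'
    return struct.unpack('>II', data[16:24])

# Fabricated minimal header for illustration (signature + IHDR length/type):
header = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR' + struct.pack('>II', 640, 480)
print(png_size(header))  # -> (640, 480)
```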

  • shag 12:41 pm on December 20, 2007 Permalink | Log in to leave a Comment
    Tags: , code code, hotsync, , palm,   

    Bluetooth HotSync your Palm with Linux 

    Here is a script, pilot-xfer-bt, that will HotSync your Palm over Bluetooth on Linux. I use a Treo 700p here, but this will probably work with other Palms with Bluetooth.

    First, make sure your host computer and Palm are paired. Then make sure your copy of rfcomm is version 3.23 or later by running ‘rfcomm --help’. As of the writing of this post, very few machines are running a rev this recent. If yours is older, either try updating your bluez-utils package, or download, compile, and install the latest bluez-utils source. All you need is the ‘rfcomm’ binary.

    Then edit that pilot-xfer-bt script and make sure that its internal path to rfcomm points to the install directory. /usr/local/bin/rfcomm is probably what you want if you installed from source.

    Anyway, to use: pilot-xfer-bt passes its args to pilot-xfer. So in other words, to sync your Palm to a repository located in /home/username/palm, run ‘sudo ./pilot-xfer-bt -s /home/username/palm’.

    Getting this thing working a few months ago was a bear. There was a nasty timing-dependent bug in rfcomm vs. udev that took days to puzzle through. At least it’s been fixed upstream. Anyway, I now know why they call these tools “bluez”

    • Carl Witty 12:26 pm on June 10, 2008 Permalink | Log in to Reply

      Thank you! This is very nice (much simpler than setting up PPP and doing a network sync).

      One small fix: it doesn’t work with filenames/database names containing spaces; you need to change $* to “$@” .

      Here’s a patch to do this, which will almost certainly be horribly mangled by the blog posting software and unusable:

      --- pilot-xfer-bt~ 2008-06-10 10:58:00.000000000 -0700
      +++ pilot-xfer-bt 2008-06-10 11:58:43.000000000 -0700
      @@ -64,7 +64,7 @@
      # Start pilot-xfer when the Palm device connects.
      # Remove the HotSync service record from sdpd now that we are done with it

    • shag 8:15 am on June 23, 2008 Permalink | Log in to Reply

      thanks very much Carl, I’ve updated the script and credited you. happy to hear that it came in useful for at least one other person :-)

  • shag 2:04 pm on December 19, 2007 Permalink | Log in to leave a Comment
    Tags: , code code, , , , pppd,   

    Bluetooth Networking with Treo 700p and Linux 

    Perhaps you are the owner of a Treo phone (or really just any Bluetooth phone) and wish to use it as a Bluetooth “modem” with your Linux box with EVDO or EDGE or HSDPA or whatever.

    Maybe you have spent hours trying to get the devices to connect, only to see a weird “Bluetooth connection starting…” message on your Treo, but nothing else happens. (Caused by connecting to the OBEX RFCOMM channel, not the DUN RFCOMM channel.)

    Or maybe you have spent hours trying to figure out why your Treo complains “ERROR” “ERROR” “ERROR” to your Linux box when you send it an AT command. (Caused by connecting to the headset audio RFCOMM channel, not the DUN RFCOMM channel.)

    Or maybe you haven’t yet wasted any moments of your rapidly dwindling life on this crap at all and Just Want Things To Work and Don’t Understand Why They Don’t.

    If any of this sounds familiar, this script, start-bt-modem, may help.

    You will need to save it to your local disk, and edit it to replace the “aa:bb:cc:dd:ee:ff” with your phone’s Bluetooth address. To find this, go to your Treo’s Bluetooth application, and change the “Visible” dropdown to “Temporary.” Then on your Linux box, run “hcitool -i hci0 scan” – your phone’s Bluetooth address should appear.

    You also need to already have paired your phone with your computer before running the script. To do this, I suggest making your computer discoverable and doing the scan from your Treo, rather than the other way around. Probably the easiest way on your Linux box is to use the GUI tools for this. Look for a weird-looking B in a dark blue oval on your menu bar (aka panel). If it isn’t there, you can try running ‘bluetooth-applet’ from a terminal. (This is part of the bluez-gnome 0.8 package on my machine). Right-click on the panel Bluetooth icon, choose “Preferences”, select “Visible and connectable for other devices”, and click “Close.” Then you will need to go to the Bluetooth application on your Treo, click “Setup Devices”, click “Trusted Devices”, click “Add Device”, click on your computer’s name, and click “Okay”. You’ll go through the pairing process – your Treo will display a number – make sure you enter the same number on your Linux box when the dialog box pops up. Then you should be set.

    Finally, you will need to run the script as root – for example, ‘sudo sh ./start-bt-modem’. If all goes well, you should see some IP addresses appear after about 30 seconds.

    I guess if you don’t use Verizon, you might also have to edit that “#777” that appears at the bottom of the script.

    • Wargpath 8:02 pm on November 29, 2008 Permalink | Log in to Reply

      BOOM! Worked on the first try! Been looking for this after a few aborted attempts getting BT dun working. Thanks!
