Technology and Creativity - Q's Brain Dump

Notes, thoughts, questions, musings of Daniel Quaroni

A Quick Note About Guice and Mybatis

Tags: programming java guice

I discovered that the Guice/MyBatis combination requires that the DAO configuration be a singleton. That makes sense. By default, every time you ask Guice for an instance it creates a new one. That also makes sense.

What doesn’t make sense is why the MyBatisModule isn’t annotated with @Singleton, as documented here.

It’s up to the programmer configuring the data source to do it, which is odd. What’s also odd is that when you use the default MyBatis connection pool, everything’s fine even without a singleton. If you try to use c3p0 or BoneCP without a singleton, however, you’ll find that MyBatis opens a block of new connections for every query.
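For what it’s worth, here’s a minimal sketch of the fix, assuming c3p0 and plain Guice bindings. The class name and connection settings are placeholders, not my actual configuration; the point is just that the pooled DataSource is scoped as a singleton so only one pool ever gets created.

```java
import javax.sql.DataSource;

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.Singleton;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Sketch of a module that makes the pooled DataSource a singleton so Guice
// doesn't build a brand new c3p0 pool every time something asks for one.
public class DataSourceModule extends AbstractModule {

    @Override
    protected void configure() {
        // DAO/mapper bindings (e.g. installing the MyBatis module) would go here.
    }

    @Provides
    @Singleton // the important part: one pool for the whole injector
    DataSource provideDataSource() {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost/mydb"); // placeholder connection settings
        ds.setUser("user");
        ds.setPassword("password");
        return ds;
    }
}
```

The same idea applies to BoneCP; whatever provider you use, scope it to a singleton.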

Visualization of Entropy

Tags: programming

Let’s dig up an old project I worked on several years ago… It’s something I’d love to get back to, spend some more time on, and be smarter about. The idea was inspired by Vincent van Gogh’s Starry Night. There’s so much chroma noise in the painting, but it forms patterns that make sense to us. I wondered if I could make a computer do something like that somehow. I spent a little time taking a totally naive stab at it. The basic idea is to take some sample pictures and then paint another picture from them by statistically weighted random sampling. It just iterates along each column, looks at the colors painted so far next to it, and picks a color that has been observed next to them. Here are the input files I gave it (all put together into one image for the purpose of this post):

The pictures to take samples from

And what it ends up producing is a little bit of entropy:

Obviously the program had no sense of the lines that need to flow through the images. The influence of the vertical scan direction is obvious in the output. It’s not enough to work with nearby colors alone. I’ve also worked on some edge detection software, so eventually I want to put the two together and see what I get.
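For the curious, the core sampling step works roughly like the sketch below. The names are made up and it’s simplified to look only at the pixel painted directly above the current one rather than all painted neighbors, but it shows the “pick a color that’s been observed next to this one” idea, with frequency providing the statistical weighting.

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Rough sketch: learn which colors were seen next to which in the samples,
// then paint a new image by picking, for each pixel, a random color that was
// observed below the color painted just above it. Colors seen more often
// appear more often in the candidate lists, which gives the weighting.
public class EntropyPainter {

    public static BufferedImage paint(List<BufferedImage> samples, int width, int height, long seed) {
        // For each observed color, remember every color seen directly below it.
        Map<Integer, List<Integer>> seenBelow = new HashMap<>();
        for (BufferedImage sample : samples) {
            for (int x = 0; x < sample.getWidth(); x++) {
                for (int y = 0; y + 1 < sample.getHeight(); y++) {
                    seenBelow.computeIfAbsent(sample.getRGB(x, y), k -> new ArrayList<>())
                             .add(sample.getRGB(x, y + 1));
                }
            }
        }

        Random random = new Random(seed);
        List<Integer> allColors = new ArrayList<>(seenBelow.keySet());
        BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                // First row has no neighbor yet, so fall back to any observed color.
                List<Integer> candidates = (y == 0)
                        ? allColors
                        : seenBelow.getOrDefault(out.getRGB(x, y - 1), allColors);
                out.setRGB(x, y, candidates.get(random.nextInt(candidates.size())));
            }
        }
        return out;
    }
}
```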

I Give Up, Microsoft. Looks Like DirectShow Is Dead.

Tags: programming

On Sunday the crew and I (Bryan and Anthony) tried to record commentary for the latest GTC race, a 1.5-hour enduro at Silverstone. Bryan had reinstalled his operating system and upgraded to Windows 8.1, and the RaceViewer software I’d written, which lets me remotely start all of our videos at once so we can comment in synch, wasn’t working on his system. I tried it out on my Windows 8.0 machine and it crashed for me (which isn’t quite what happened to him – I think he basically saw nothing happen at all). It was still running fine on Windows 7.

I loaded up the solution in Visual Studio on my Win8 machine and discovered that the DirectShow filters I used to use (specifically the Microsoft.DirectX.AudioVideoPlayback assembly) don’t exist in Windows 8. I recall having to install a DirectX runtime or update or SDK to get them on Windows 7, but I tried installing something like three different things and they either wouldn’t install (one because I already had a later version of DirectX than it provided, and one due to an installer error I didn’t bother to investigate) or they didn’t add the AudioVideoPlayback assembly. Eventually I got sick of banging my head against that wall and went a different route. Now I’m loading a Windows Media Player instance into my form. I don’t like it as much because WMP seems to start playing the file as soon as I call axWindowsMediaPlayer1.URL = fileName, and I can’t seem to stop it despite calling axWindowsMediaPlayer1.Ctlcontrols.pause(). With the old DirectX player I could load the video and then call play when I was ready, which kept the delay between me pushing the start button on my computer and Bryan and Anthony’s video players starting to a minimum. With WMP there could be a small delay as their systems load the video and then start playing.

But Bryan reports that he can load the new RaceViewer, so that seems to have solved the problem. We’ve got a backlog of two races to record (well, three if you count this week’s race, which we didn’t get a replay file of and won’t bother recording anyway since it was pretty poorly attended), so we need to get to it.

Is Adobe Creative Cloud Good?

Tags: business opinion

As a Photoshop licensee (not owner!) I’ve gotten various emails over the course of the year from Adobe advertising their Creative Cloud and incentives to sign up. My first reaction was “You’ve got to be kidding.” Wait, I can pay you more than I used to pay you for the software I use, and if I ever stop paying I can’t use my software anymore? I’d ask where I can sign up, but they’ve sent enough emails informing me that I know the exact answer. That’s a reaction that the internet in general seems to have had, but there seem to be some people who have embraced it.

There are three fundamental issues that people in general and I have with it:

  • It costs more. Let’s ignore the introductory-year pricing for a moment. There are two ways it costs more. First, in the past you were always able to upgrade at your leisure, skip 2 or 3 versions between upgrades, and still purchase at the upgrade price. You could also buy the original product and upgrades in sales; they didn’t happen often, but you could occasionally get a good deal, and coupled with an every-other-year upgrade cycle you’d end up spending on the order of $75/year on Photoshop. Or you could opt not to upgrade at all and keep using the product until you finally moved to a computer with an operating system that simply would not support it. Figure that’s a good 10 years after the initial release of the version, which works out to $7.50/year if you want to think of it that way.
  • The end of discounts. Adobe no longer has any incentive to offer you a discount ever again. They used to do it to get bargain shoppers to pay something for the current upgrade rather than skipping it and waiting for the next. Now it’s pay up or your software deactivates.
  • You’re renting, not buying. Once you’re hooked, Adobe can set any price they want because you’re just renting the software rather than purchasing a license, and if you stop paying you get evicted.

So now we’re faced with Photoshop costing $120/year, and 10 years from now you’ll have dropped $1200 on it. And you know you’ll have been paying that whole time, because if you don’t, you don’t get to use Photoshop!

So far I’ve only mentioned the incremental upgrade cost of Photoshop, which is exactly where Creative Cloud may reveal itself to have some value. If you don’t currently own Photoshop, it’ll cost you $650 to purchase CS6. That’s distinctly not cheap, and that high barrier-to-entry price extends across the Adobe professional product line (Illustrator, After Effects, Premiere Pro). With Creative Cloud you can spend just $240/year for each of these products individually, which means you’ll break even vs. the previous entry cost in year 3, and by then you’d already have gotten the upgrade you would have wanted to buy, so really you break even somewhere into year 4 of a single-program Creative Cloud subscription. Then the economics of Creative Cloud turn in Adobe’s favor: you’re well hooked on their product, and they can take you for all you’re worth.

Hey, did you think $650 was a lot to spend on a single product? Well, try $2500 for the Creative Suite 6 Master Collection. That’ll get you all the big programs I’ve mentioned plus a couple of others, and it’ll sting pretty badly when it comes time to pay off that credit card bill. How does Creative Cloud compare? Well, now things get interesting. If you own a license to even the lowly Photoshop CS3, you qualify for the $360/FIRST YEAR price and get access to pretty much everything Adobe makes. At that price this is a great deal for both parties: users get access to a ton of top-tier software for a reasonable price, and Adobe gets a guaranteed revenue stream in perpetuity. Unfortunately the price jumps to $600/year after the first year. That’s still a lot of money, and it may still ultimately be worth it, but it definitely swings the economics back strongly in Adobe’s favor.

What do I think? I think the right way to go is to give people something palatable to keep them subscribing every year without a second thought. $360/year from every one of your 8.4 million Creative Suite users (according to CNET), plus more from what I’m certain is a small army of Photoshop-only subscribers, is a pretty tidy revenue stream, I’d say. For reference, in 2012 Adobe had $4.4 billion in revenue. If every one of those 8.4 million CS users signed up for CC at $360/year, that alone would be $3 billion in revenue, and with the extra from people like me upgrading they’d easily exceed their 2012 earnings. If all of those users were on the complete CC package at $50/month instead, it would earn them $5 billion/year, so suddenly you see how this ends up being a big win for Adobe.
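For the record, the arithmetic behind those figures is nothing more than this (using the same 8.4 million user count):

```java
// Quick back-of-the-envelope check of the subscription revenue figures.
public class CreativeCloudMath {
    public static void main(String[] args) {
        double users = 8_400_000;                      // Creative Suite users, per the CNET figure
        double yearlyAt360 = users * 360;              // everyone on the $360/year plan
        double yearlyAt50PerMonth = users * 50 * 12;   // everyone on the full $50/month plan

        System.out.printf("At $360/year:  $%.1f billion/year%n", yearlyAt360 / 1e9);        // ~ $3.0B
        System.out.printf("At $50/month:  $%.1f billion/year%n", yearlyAt50PerMonth / 1e9); // ~ $5.0B
    }
}
```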

Git Your Act Together

Tags: programming

A couple weeks ago we switched from svn to git for our version control. My initial reaction was that it was accomplishing the same thing we were doing before, only with more commands that made less sense. Oh, also fear. I was very afraid, having read various comments in otherwise helpful articles about git warning me never to run certain commands I didn’t understand, or I could erase history or make life miserable for everyone else using the remote repository.

The fear drove me to approach with trepidation. We had a wiki page explaining how to use git with the workflow we’d been using with svn and so I followed that for a while and got comfortable with it. I still managed to enter some command sequences that left me calling for help from our local git experts every now and then.

Yesterday I ran into my first conflict by stashing, pull --rebase-ing, and then popping the stash. This was the equivalent of svn updating my source prior to checking in. The conflict was no big deal (someone had turned an if/else into a ternary), but after I resolved it in the file, git still marked the file as red when I ran git status, indicating that it was still conflicted. I don’t recall the exact output from the status, but I think it wanted me to add the file to indicate that the conflict was resolved. However, I was on my local master and not ready to check that file in (I wanted to make a quick change to a properties file and commit/push that). I called for help, and after some time it became evident that my brief success with svn-ized git needed to come to an end and I needed to dive head first into the true gitflow. Using it like svn got me comfortable that git wouldn’t arbitrarily lose my data and introduced me to some tools and commands, but it wasn’t a long-term solution. Ultimately we fixed the problem at hand by making a local feature branch off master, popping my stash there, and committing to the feature branch. Now I’ve adopted the feature branch workflow.

I also resolved to stop living in fear! If git’s the way we’re going then I’m not going to sit helpless while our git master confuses me with his talk of squashing multiple commits to one before merging and how this command or that will either merge successfully onto the end of master or insert my change into the middle of the history and force everyone pulling to perform an out of order merge because of what I’ve done.

No. I’m going to own it. I’m going to master git and truly understand its workings so that I can navigate the history tree at will and bend commit points around my finger. So far I’m on chapter 3 of Pro Git and everything’s making good sense. It’s absolutely true that at its core git is straightforward. It’s just these random edge cases that seem to pop up with some frequency. For example, one of my coworkers on a Mac updated and git got upset because someone had renamed a file by only changing its case, so the case-insensitive Mac filesystem and git didn’t get along. Git was warning that it noticed an unversioned change to the file, if I recall, and after trying a bunch of things on master it just would not give up. I wasn’t there when they found the solution, but I believe it involved switching to a branch, resolving the problem there, and then merging back onto master. Mental note: double check what happened there.

Anyway, soon git shall call me Master…. Muahahaha!

On the Network

Tags: general

I went to my first development event, the Boston Post Mortem, and it was a great experience. I’ve finally gotten my own project to the point where I feel confident in it and in my ability to complete it, so I didn’t feel like a complete poseur talking to people about it. The post mortem was 85% networking/meetup and 15% post mortem, which balanced out quite well. I didn’t know anyone there, but I managed to walk myself into plenty of conversations and get introduced to people, so I call it a huge success. The night could easily have been me standing around awkwardly in a bar staring at people talking to each other. Everyone I met there was really nice and enthusiastic, so now I’m looking forward to going to more!

Star Trek: Into Darkness Was Terrible…

Tags: entertainment movies

Yes, it has been out for a long time now, even on video, but I finally got around to seeing Star Trek: Into Darkness. Plenty of other people have already commented on this, but I feel like it deserves yet another voice. The movie’s director openly stated that he didn’t like Star Trek and that he found it too philosophical. Philosophy is at the heart of Star Trek, so he pretty clearly wasn’t the right choice to create this film. A true professional could get over themselves and embrace it and add their own twist to it. J.J. Abrams clearly cannot. I can happily accept a Star Trek that’s more action packed, but it ended up being a bad fanfic. It was a completely generic scifi action movie with a Star Trek skin. I can enjoy truly great movies of pretty much any genre because anything so spectacularly made exceeds the bounds of its fan base. I like scifi, and I like Star Trek, and I like action, but I do not like Star Trek: Into Darkness.

Unfortunately it seems as though others disagree with me, so I expect we’ll see more of this successful trash in the future. I’d weep for the future of Star Wars if Lucas hadn’t already trashed it. :)

Back to Blogging

Tags: news general

It was over a year ago that I set up this site and posted my first blog post to it…. And the last. I’m back to it now with a new angle. Instead of trying to make every post super in-depth I’ll just post whatever comes to mind. I often run into interesting problems and solutions that aren’t a huge deal, but I’ll forget them if I don’t write them down somewhere. As long as I’m doing that I might as well share them with the world.

My original site was generated using a static site generator that I wrote, and while it worked well I didn’t want to spend the time to make it pretty… So now I’m using Octopress, which is very similar but comes with some nice CSS templates. I’m using Robby Edwards’s Octopress Tag Pages plugin to get topic tags, since for some reason Octopress/Jekyll doesn’t have that feature out of the box. Static site generation is definitely the way to go.

The Tech Behind SimRacingDan’s Broadcast Videos

Tags: programming gaming sim racing

I’ve been making videos of the races I participate in for a while and posting them to my youtube channel, but I was getting a little bored and frustrated with them. If I had a great, action-packed race it was OK, but if I didn’t then I had to sit through the entire video (which could be up to 2 hours long) and try to say something interesting while I spent 10 laps nowhere near my competition, either in front or behind. Plus the videos were all about me, me, me… which is fine if you’re Stephen Colbert, but I’m not so into it. I wanted to incorporate everyone in the league, because really the awesome racing wouldn’t be possible without their participation.

On-board camera from a broadcast video of ISRA League Season 12 at Mid Ohio

iRacing has an API to stream out live telemetry and allow a program to control which camera to use and which car to point it at, and the API’s SDK came with two excellent sample programs: one that writes the telemetry stream to a CSV file, and one that shows how to control the camera. All I had to do was a whole lot of work.

I figured there would be three programs I’d have to write:

  • One to read the telemetry CSV, analyze it, pick the best parts, and write out a control script for…
  • One to read the control script and control the camera while the sim plays a replay of the race, with FRAPS recording the graphics to video files
  • And one to draw a broadcast-style overlay with a clock timing down and standings scrolling across the top

I started with the analyzer, which I called Ranalyzer (Race Analyzer)… See? Clever. My initial approach was to detect every ‘interesting’ thing I could based on the data available. Speaking of the data available, here’s all I know about any car at any moment:

  • What lap it’s on
  • Its position on track as a % of total lap length
  • The surface type beneath the car: On track, off track, entering or leaving pits, in pits.

That’s it. Using the track length and the data from the previous timeframes I can determine a car’s speed and whether it’s accelerating or decelerating. I can also figure out the current standings.
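As a rough illustration (the names are mine, not the SDK’s, and it assumes 60 samples per second), deriving speed from two consecutive frames looks something like this:

```java
// Rough sketch of deriving speed from two consecutive telemetry frames.
// Field and method names here are hypothetical, not the iRacing SDK's.
public class SpeedEstimator {

    public static final double SAMPLE_RATE_HZ = 60.0;

    /**
     * @param trackLengthMeters total lap length
     * @param lapPctPrev        lap-distance percentage (0.0-1.0) on the previous frame
     * @param lapPrev           lap number on the previous frame
     * @param lapPctCurr        lap-distance percentage on the current frame
     * @param lapCurr           lap number on the current frame
     * @return estimated speed in meters per second, or -1 if data is missing
     */
    public static double speedMps(double trackLengthMeters,
                                  double lapPctPrev, int lapPrev,
                                  double lapPctCurr, int lapCurr) {
        if (lapPctPrev < 0 || lapPctCurr < 0) {
            return -1; // the stream marks missing data with -1
        }
        // Total distance covered, accounting for crossing the start/finish line.
        double lapsCovered = (lapCurr + lapPctCurr) - (lapPrev + lapPctPrev);
        double meters = lapsCovered * trackLengthMeters;
        return meters * SAMPLE_RATE_HZ; // distance per frame -> distance per second
    }
}
```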

The interesting events I detected were:

  • Off track
  • Slow car, detected by analyzing the average lap time for the car class, discarding the top and bottom 10%
  • Stopped car
  • In pits
  • Side-by-side racing
  • Close racing (meaning something like 0.5 seconds between cars)
  • Passes, detected by a change in standings from one frame (1/60th of a second) to the next
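Pass detection, for instance, boils down to comparing the running order between consecutive frames. A rough sketch with made-up types:

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of pass detection: compare the running order between two
// consecutive frames and report any car that gained a position.
// The types and field names here are illustrative, not from the actual tool.
public class PassDetector {

    public record Pass(int carId, int newPosition) {}

    /**
     * @param prevOrder car IDs sorted by race position on the previous frame
     * @param currOrder car IDs sorted by race position on the current frame
     */
    public static List<Pass> detect(List<Integer> prevOrder, List<Integer> currOrder) {
        List<Pass> passes = new ArrayList<>();
        for (int pos = 0; pos < currOrder.size(); pos++) {
            int carId = currOrder.get(pos);
            int prevPos = prevOrder.indexOf(carId);
            if (prevPos > pos) { // moved up the order since the last frame
                passes.add(new Pass(carId, pos + 1));
            }
        }
        return passes;
    }
}
```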

And so the Ranalyzer analyzed the replay telemetry and wrote out a script of interesting events. My initial thought was that it’d be helpful to the folks in the league who do the writeups of the races. I noticed some problems, though. For example, it kept incorrectly identifying passes where there weren’t any. I investigated and discovered that there were lots of blocks of missing data – the car position % was -1 for long stretches for lots of cars.

I then did a bunch of work on the Ranalyzer and camera controller so that I could write a script for the camera controller to show just the interesting parts of the race, but first I had to decide what the interesting parts of the race actually were. I made the Ranalyzer identify and string frames together into ‘Events’ of different types. For example, if a car was off track for a second it would build an Event for that car with the start and stop time, and each Event is scored according to an interest level that I cooked up. Passes were the most interesting Events, with off track and slow being on the lower side of interesting.

With the events generated, it then placed them on a timeline of what to actually show. I spent a long time trying to figure out how to make it show interesting stuff for a decent amount of time. I ended up putting a one-minute cap on each event, and it would scan through and place the most interesting events on the timeline for as long as they lasted, or up to a minute. This ended up being a pretty lousy way to do it because it just wasn’t showing the most interesting stuff all the time.

I did a pretty heavy rewrite that changed the way I thought of events. Previously they had start and stop times and everything was as accurate as my data stream (1/60 of a second). I also rethought what was interesting. For example, off track ended up being a bit boring so I dropped it, and passing and side-by-side were extraneous because they were already covered by the close racing event type; I simply gave close racing a higher score the closer the cars were. I also reduced the resolution to one slot per second, which simplified the job of figuring out what to watch at any time. Events no longer spanned multiple seconds; each is now its own discrete unit assigned to the one-second slot in which it occurred. I also added a race finish event. Finally, I changed the way it picks what to look at: it scans through the entire eventline, picks the highest-scoring valid event (meaning the cars involved don’t go to -1 in the data stream), and puts it on the timeline for up to 20 seconds. Rinse and repeat until there are no gaps on the timeline longer than 20 seconds, then merge the events on either side of any remaining gaps together.
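Sketched out in code (with illustrative types, and the final merge step left out), the timeline-filling pass looks roughly like this:

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of "pick the best event, cover up to 20 seconds around it,
// repeat until no 20-second gaps remain." Types and thresholds are illustrative.
public class TimelineBuilder {

    public record Event(int second, int carA, int carB, double score, boolean valid) {}
    public record Shot(int startSecond, int endSecond, Event focus) {}

    public static List<Shot> build(List<Event> events, int raceLengthSeconds) {
        List<Shot> timeline = new ArrayList<>();
        boolean[] covered = new boolean[raceLengthSeconds];

        // Most interesting events first.
        List<Event> sorted = new ArrayList<>(events);
        sorted.sort((a, b) -> Double.compare(b.score(), a.score()));

        for (Event e : sorted) {
            if (!e.valid() || e.second() >= raceLengthSeconds || covered[e.second()]) {
                continue; // skip events with missing data or already-covered seconds
            }
            int start = Math.max(0, e.second() - 20); // show up to 20 seconds leading in
            int end = e.second();
            timeline.add(new Shot(start, end, e));
            for (int s = start; s <= end; s++) {
                covered[s] = true;
            }
            if (noGapLongerThan(covered, 20)) {
                break; // no uncovered stretch longer than 20 seconds remains
            }
        }
        // (The real tool then merges events on either side of remaining gaps; omitted here.)
        return timeline;
    }

    private static boolean noGapLongerThan(boolean[] covered, int maxGap) {
        int run = 0;
        for (boolean c : covered) {
            run = c ? 0 : run + 1;
            if (run > maxGap) {
                return false;
            }
        }
        return true;
    }
}
```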

The final piece needed was to use the recorded replay video and the script from the Ranalyzer to draw the overlay over the video with the clock, running standings, driver tags when the video goes on-board, and the points standings before and after the race. Chuck Chambliss put together the graphics and I took his images and started cutting them up for use. I found a .net library for reading and writing AVIs, but it only works on files under 2GB, so I had to chop the race video into pieces and compress them down.

The overlayer took time to write but seemed to be going pretty promisingly. The only thing I wasn’t completely happy with was that for every graphic I had one file for the image and one file with a transparency mask, because the alpha channel didn’t appear to be available in .net Bitmaps. Things were coming together, but the graphics looked a bit crappy. It was also very, very slow: it would take 20 times the race length or more to overlay a race, and 2 days of solid processing wasn’t going to cut it.

I searched for a solution to the alpha problem and discovered LockedBitmap, which was exactly what I wanted: faster access AND the alpha channel! I was able to throw out all the mask images and just use the alpha channel in the graphic. I wasn’t sure exactly how to do alpha blending, but I threw together an algorithm that combines the color channel values in ratios of the alpha channel and it worked perfectly. I also discovered that running in Release mode rather than Debug makes it go around 3x-5x faster.
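The blending itself is just a per-channel weighted average. The real overlayer is a .NET program, so this Java snippet is only meant to show the math:

```java
// Standard "source over" alpha blend for one pixel, using 0-255 channel values.
// This mirrors the idea of combining the color channels in ratios of the
// overlay's alpha; it's an illustration of the math, not the actual overlayer code.
public class AlphaBlend {

    /** Blends an ARGB overlay pixel onto an opaque background pixel. */
    public static int blend(int overlayArgb, int backgroundRgb) {
        int alpha = (overlayArgb >>> 24) & 0xFF;

        int r = blendChannel((overlayArgb >> 16) & 0xFF, (backgroundRgb >> 16) & 0xFF, alpha);
        int g = blendChannel((overlayArgb >> 8) & 0xFF, (backgroundRgb >> 8) & 0xFF, alpha);
        int b = blendChannel(overlayArgb & 0xFF, backgroundRgb & 0xFF, alpha);

        return (0xFF << 24) | (r << 16) | (g << 8) | b;
    }

    private static int blendChannel(int overlay, int background, int alpha) {
        // Overlay weighted by its alpha, background by whatever alpha is left over.
        return (overlay * alpha + background * (255 - alpha)) / 255;
    }
}
```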

Oh wait, I forgot. I did need to write one more program – the viewer. The race viewer is a simple Form window that loads the video and shows the standings of the top 12 cars in each class, their split from the leader and from the car ahead, and whether they’re in the pits or laps down. It has a client mode and a server mode: I run the server and my fellow commentators connect to it as clients. I have a button to start the video that triggers playback to start on all the clients at the same time.
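The start-signal part is conceptually tiny. The real viewer is a Windows Forms app, but the idea is roughly this (the names and wire format here are made up):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Sketch of the start-signal idea: the server accepts commentator clients and,
// when the start button is pressed, tells all of them to begin playback at once.
public class StartSignalServer {

    private final List<Socket> clients = new ArrayList<>();

    /** Waits for the expected number of commentator clients to connect. */
    public void acceptClients(int port, int expectedClients) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (clients.size() < expectedClients) {
                clients.add(server.accept());
            }
        }
    }

    /** Called when the host presses the start button. */
    public void startAll() throws IOException {
        for (Socket client : clients) {
            PrintWriter out = new PrintWriter(client.getOutputStream(), true); // auto-flush
            out.println("START"); // clients begin playback on receipt
        }
    }
}
```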

The technical challenge I discovered when I got the viewer up and running was that the video length didn’t end up matching the audio length, and neither stayed in synch with the script. I think that happened because the capture must not hold a steady 30fps, so a couple of frames are dropped here and there but the video is played back at 30fps. To compensate I built a fudge factor into the viewer and overlayer that advances the clock a little every now and then. I also use Audacity to shorten the audio and experiment with different values until it’s in synch through the whole video.
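One way to think about that fudge factor is as a proportional rescaling of script time to video time. My actual implementation just nudges the clock periodically, so treat this purely as an illustration of the idea:

```java
// Sketch of the clock fudge factor: if frames were dropped during capture,
// the recorded video is slightly shorter than the telemetry script expects,
// so script timestamps get squeezed proportionally. Names are hypothetical.
public class ClockFudge {

    /**
     * @param scriptSeconds        a timestamp from the Ranalyzer script
     * @param expectedVideoSeconds how long the video should be according to the script
     * @param actualVideoSeconds   how long the captured video actually is
     * @return where in the video that script moment really lands
     */
    public static double toVideoTime(double scriptSeconds,
                                     double expectedVideoSeconds,
                                     double actualVideoSeconds) {
        double ratio = actualVideoSeconds / expectedVideoSeconds; // < 1.0 when frames were dropped
        return scriptSeconds * ratio;
    }
}
```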

That’s it. We record the commentary, I mix it in with the audio, and then render out the master to upload to youtube.

I’m constantly improving it, though. For example, I just added the race flag color (green, yellow, white, checkered) to the running bar display as the background for the clock. This ended up being slightly tricky because that information isn’t available in the telemetry from a replay, so I need to record telemetry from the live race as I drive it and then record a second copy off the replay, because I need the telemetry to be in synch with the replay. There’s a slight timing discrepancy between the two, so the Ranalyzer figures out what it is and compensates, getting the flag color from the race telemetry and everything else from the replay telemetry.
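One way to find that offset (I’m not claiming this is exactly what the Ranalyzer does) is to slide one car’s lap-position trace against the other until they line up best:

```java
// Rough sketch of estimating the time offset between live and replay telemetry
// by sliding one lap-position trace over the other and keeping the shift with
// the smallest average difference. Assumes both traces use the same sample rate.
public class TelemetryAligner {

    /**
     * @param live     lap-distance percentages for one car from the live-race telemetry
     * @param replay   lap-distance percentages for the same car from the replay telemetry
     * @param maxShift largest offset (in samples) to consider in either direction
     * @return the shift (in samples) that best aligns the replay trace to the live trace
     */
    public static int bestShift(double[] live, double[] replay, int maxShift) {
        int bestShift = 0;
        double bestError = Double.MAX_VALUE;
        for (int shift = -maxShift; shift <= maxShift; shift++) {
            double error = 0;
            int count = 0;
            for (int i = 0; i < live.length; i++) {
                int j = i + shift;
                if (j < 0 || j >= replay.length || live[i] < 0 || replay[j] < 0) {
                    continue; // skip out-of-range samples and -1 "missing data" frames
                }
                double diff = live[i] - replay[j];
                error += diff * diff;
                count++;
            }
            if (count > 0 && error / count < bestError) {
                bestError = error / count;
                bestShift = shift;
            }
        }
        return bestShift;
    }
}
```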

That lays the groundwork for better full course caution handling, since our live admins in ISRA throw FCCs in the event of really bad accidents with cars immobile, flipped, etc. My next challenge is to make the Ranalyzer go back up to one minute from the caution throw, find all the stopped cars in that time period, show them prior to their stopping from a couple of different angles, and then write that out in such a way as to make the CameraController rewind, show the accident, and jump back to live video before the race goes green again. And there’s more after that, but I’ll take it one step at a time…