Tags: programming gaming sim racing
I’ve been making videos of the races I participate in for a while and posting them to my YouTube channel, but I was getting a little bored and frustrated with them. If I had a great, action-packed race it was OK, but if I didn’t, I had to sit through the entire video (which could be up to 2 hours long) and try to say something interesting while I spent 10 laps nowhere near my competition, either in front or behind. Plus the videos were all about me, me, me… which is fine if you’re Stephen Colbert, but I’m not so into it. I wanted to incorporate everyone in the league, because really the awesome racing wouldn’t be possible without their participation.
iRacing has an API to stream out live telemetry and allow someone to programmatically control which camera to use and who to point it at, and the API SDK came with two excellent sample programs: one that writes the telemetry stream to a CSV file, and one that shows how to control the camera. All I had to do was a whole lot of work.
I figured there were three programs I’d have to write:
- One to read the telemetry CSV, analyze it, pick the best parts, and write out a control script
- One to read the control script and control the camera while the sim plays a replay of the race, with FRAPS recording the graphics to video files
- One to draw a broadcast-style overlay with a clock timing down and standings scrolling across the top
I started with the analyzer, which I called the Ranalyzer: Race Analyzer… see? Clever. My initial approach was to detect every ‘interesting’ thing I could based on the data available. Speaking of the data available, here’s all I know about any car at any moment:
- What lap it’s on
- Its position on track as a % of total lap length
- The surface type beneath the car: On track, off track, entering or leaving pits, in pits.
That’s it. Using the track length and the data from the previous frames I can determine a car’s speed, and whether it’s accelerating or decelerating. I can also figure out what the current standings are.
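The speed calculation falls out of the lap-fraction data pretty directly. Here’s a minimal sketch of the idea (not the actual code, which was .NET); the track length and sample rate constants are assumptions for illustration:

```python
# Hypothetical sketch: deriving speed from lap-fraction telemetry.
# Assumes 60 Hz samples of (lap, lap_pct) per car; the track length
# here is made up for illustration.
TRACK_LENGTH_M = 5000.0   # assumed track length in meters
SAMPLE_RATE_HZ = 60.0     # telemetry rate per the post

def speed_mps(prev, curr, track_length=TRACK_LENGTH_M, hz=SAMPLE_RATE_HZ):
    """prev and curr are (lap, lap_pct) samples one frame apart."""
    # Lap number plus fraction gives total laps covered; the difference
    # times track length is distance per frame, which handles the wrap
    # at the start/finish line automatically.
    dist = ((curr[0] + curr[1]) - (prev[0] + prev[1])) * track_length
    return dist * hz
```

Summing the lap number and the lap fraction before differencing is what keeps the math sane when a car crosses the start/finish line between samples.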
The interesting events I detected were:
- Off track
- Slow car, detected by analyzing the average lap time for the car class, discarding the top and bottom 10%
- Stopped car
- In pits
- Side-by-side racing
- Close racing (meaning something like 0.5 seconds between cars)
- Passes, detected by a change in standings from one frame (1/60th of a second) to the next
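Pass detection from standings changes can be sketched like this; a hypothetical illustration, assuming each frame provides a list of car IDs ordered by race position:

```python
# Hypothetical sketch of frame-to-frame pass detection, assuming each
# telemetry frame yields a standings list of car IDs in race order.
def detect_passes(prev_standings, curr_standings):
    """Return (overtaker, overtaken) pairs between two consecutive frames."""
    prev_pos = {car: i for i, car in enumerate(prev_standings)}
    curr_pos = {car: i for i, car in enumerate(curr_standings)}
    passes = []
    for a in curr_standings:
        for b in curr_standings:
            # a passed b if a was behind b last frame and is ahead now.
            if a != b and prev_pos[a] > prev_pos[b] and curr_pos[a] < curr_pos[b]:
                passes.append((a, b))
    return passes
```

Comparing every pair is O(n²) per frame, but with a field of a few dozen cars at 60 Hz that’s cheap; the fragility here is exactly the one described below, where missing data corrupts the standings.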
The Ranalyzer then analyzed the replay telemetry and wrote out a script of interesting events. My initial thought was that it’d be helpful to the folks in the league who do the writeups of the races. I noticed some problems, though. For example, it kept incorrectly identifying passes where there weren’t any. I investigated and discovered that there were lots of blocks of missing data: the car position percentage was -1 for stretches of time for lots of cars.
I then did a bunch of work on the Ranalyzer and camera controller so that I could write a script for the camera controller to show just the interesting parts of the race, but first I had to decide what the interesting parts of the race actually were. I made the Ranalyzer identify and string together frames into ‘Event’s of different types. For example, if a car was off track for a second it would build an Event for that car with the start and stop time, and each Event is scored according to an interest level that I cooked up. Passes were the most interesting Events, with off track and slow being on the lower side of interesting.
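The frames-to-Events step might look something like this sketch. The score values are invented for illustration; the post only establishes that passes scored highest and off-track/slow lowest:

```python
# Hypothetical sketch of stringing per-frame detections into scored Events.
SCORES = {"pass": 10, "close": 7, "off_track": 2, "slow": 1}  # invented values
FRAME = 1.0 / 60.0  # telemetry frame interval

def build_events(frames):
    """frames: iterable of (time_s, car_id, event_type) detections.
    Consecutive same-car, same-type frames merge into one scored Event."""
    events = []
    # Sort so that runs for each (car, type) are contiguous in time.
    for t, car, etype in sorted(frames, key=lambda f: (f[1], f[2], f[0])):
        last = events[-1] if events else None
        if last and last["car"] == car and last["type"] == etype \
                and t - last["stop"] <= FRAME + 1e-9:
            last["stop"] = t  # extend the running event
        else:
            events.append({"car": car, "type": etype, "start": t,
                           "stop": t, "score": SCORES.get(etype, 0)})
    return events
```

A gap of more than one frame between detections starts a new Event, so a car that goes off track twice produces two separately scored Events.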
With the events generated, it then placed them on a timeline of what to actually show. I spent a long time trying to figure out how to make it show interesting stuff for a decent amount of time. I ended up putting a cap of 1 minute on each event, and it’d scan through and place the most interesting events on the timeline for as long as they lasted, or up to a minute. This ended up being a pretty lousy way to do it because it just wasn’t showing the most interesting stuff all the time.
I did a pretty heavy rewrite that changed the way I thought of events. Previously they had start and stop times, and everything was as accurate as my data stream (1/60th of a second). I also rethought what was interesting. Off track ended up being a bit boring so I dropped it, and passing and side-by-side were extraneous because they were already covered by the close racing event type; I simply scored that event higher the closer the racing was. I reduced the resolution to one event per second, which simplified the job of figuring out what to watch at any time: events no longer spanned multiple seconds but were each their own discrete unit, assigned to the 1-second slot in which they occurred. I also added a race finish event. Finally, I changed the way it picks what to look at: it scans through the entire eventline, picks the highest-scoring valid event (meaning the cars involved don’t go -1 in the data stream), and puts it on the timeline for up to 20 seconds. Rinse and repeat until there are no 20-second gaps left on the timeline, then merge the events on either side of any remaining gaps together.
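The greedy timeline fill described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual code; the validity check against -1 data and the final edge-merging step are omitted for brevity:

```python
# Hypothetical sketch of the greedy timeline fill: repeatedly take the
# highest-scoring event, reserve up to 20 s of timeline around it, and
# stop once no uncovered gap of 20 s or more remains.
def fill_timeline(events, race_len_s, slot_s=20):
    """events: list of (score, time_s, car_id) one-second events.
    Returns the chosen (start_s, end_s, car_id) shots in timeline order."""
    chosen = []
    for score, t, car in sorted(events, reverse=True):
        start = max(0, t - slot_s // 2)
        end = min(race_len_s, start + slot_s)
        # Skip if this window overlaps something already on the timeline.
        if any(not (end <= s or start >= e) for s, e, _ in chosen):
            continue
        chosen.append((start, end, car))
        # Done when every remaining gap is shorter than one slot.
        spans = sorted(chosen)
        edges = [0] + [x for s, e, _ in spans for x in (s, e)] + [race_len_s]
        if all(edges[i + 1] - edges[i] < slot_s for i in range(0, len(edges), 2)):
            break
    return sorted(chosen)
```

The attractive property of this approach over the old one-minute-cap version is that the best moments of the race are guaranteed to make the cut first, and coverage of the rest follows from the gap condition rather than from event durations.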
The final piece needed was to use the recorded replay video and the script from the Ranalyzer to draw the overlay over the video with the clock, running standings, driver tags when the video goes on-board, and the points standings before and after the race. Chuck Chambliss put together the graphics, and I took his images and started cutting them up for use. I found a .NET library for reading and writing AVIs, but it only works on files under 2GB, so I had to chop the race video into pieces and compress them down.

The overlayer took time to write, but seemed to be going pretty promisingly. The only thing I wasn’t completely happy with was that for each graphic I had one file for the image and one file with a transparency mask, because the alpha channel didn’t appear to be available in .NET Bitmaps. Things were coming together, but the graphics looked a bit crappy. It was also very, very slow: it’d take 20 times the race length or more to overlay it, and 2 days of solid processing wasn’t going to cut it.

I searched for a solution to the alpha problem and discovered LockedBitmap, which was exactly what I wanted! Faster access AND the alpha channel! I was able to throw out all the mask images and just use the alpha channel in the graphic. I wasn’t sure exactly how to do alpha blending, but I threw together an algorithm that combines the color channel values in ratios of the alpha channel, and it worked perfectly. I also discovered that running in Release mode rather than Debug makes it go around 3x-5x faster.
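The per-pixel blend described above is the standard “over” operator: mix each overlay channel with the video channel in the ratio of the overlay’s alpha. A minimal sketch (in Python rather than the actual .NET code):

```python
# Hypothetical sketch of the per-pixel alpha blend: each output channel
# is the overlay and video channels mixed in the ratio of overlay alpha.
def blend_pixel(overlay_rgba, video_rgb):
    r, g, b, a = overlay_rgba
    alpha = a / 255.0
    return tuple(int(round(o * alpha + v * (1 - alpha)))
                 for o, v in zip((r, g, b), video_rgb))
```

At alpha 255 the overlay wins outright, at 0 the video shows through untouched, and anything in between gives the smooth anti-aliased edges the mask-file approach was butchering.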
Oh wait, I forgot. I did need to write one more program – the viewer. The race viewer is a simple Form window that loads the video and shows the standings of the top 12 cars in each class and their split from the leader and the car ahead, as well as whether they’re in the pits or laps down. It has a client mode and a server mode. I run the server mode and my fellow commentators connect to it in client mode. I have a button to start the video that triggers the video to start on the clients at the same time.
The technical challenge I discovered when I got the viewer up and running was that the video length didn’t match the audio length, and neither synced with the script. I think that happened because FRAPS must not capture at exactly 30fps all the time, so a couple of frames are dropped here and there, but the video is played back at a constant 30fps. To compensate, I built a fudge factor into the viewer and overlayer that advances the clock a little every now and then. I also use Audacity to shorten the audio, experimenting with different values until it’s in sync through the whole video.
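One way to picture the fudge factor is as a constant rate multiplier on the clock. The value below is made up; as described above, the real one was found by experiment:

```python
# Hypothetical sketch of the fudge-factor clock: the overlay/viewer clock
# runs slightly fast relative to the video to absorb frames FRAPS dropped
# during capture. The fudge value here is invented for illustration.
FUDGE = 1.002  # assumed: clock runs 0.2% fast

def overlay_time_s(video_frame, fps=30.0, fudge=FUDGE):
    """Map a video frame number to the race-clock time it should display."""
    return (video_frame / fps) * fudge
```

Even a 0.2% rate mismatch adds up to about 7 seconds of drift over an hour of video, which is why the sync problem only becomes obvious deep into a long race.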
That’s it. We record the commentary, I mix it in with the audio, and then render out the master to upload to YouTube.
I’m constantly improving it, though. For example, I just added the race flag color (green, yellow, white, checkered) to the running bar display as the background for the clock. This ended up being slightly tricky because that information isn’t available in the telemetry from a replay, so I need to record telemetry from the live race as I drive it, then record a second copy off the replay, because I need the telemetry to be in sync with the replay. There’s a slight timing discrepancy between the two, so the Ranalyzer figures out what it is and compensates, getting the flag color from the race telemetry and everything else from the replay telemetry.
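One plausible way to find that live-versus-replay discrepancy is to slide one car’s lap-fraction trace over the other and take the shift that lines them up best. This is a guess at the approach, not the actual implementation:

```python
# Hypothetical sketch of aligning live and replay telemetry: try every
# shift in a window and keep the one with the smallest mean difference
# between the two lap-fraction traces for the same car.
def find_offset(live_pct, replay_pct, max_shift=120):
    """Returns the shift (in samples) that best aligns replay onto live."""
    best_shift, best_err = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        err, n = 0.0, 0
        for i, r in enumerate(replay_pct):
            j = i + shift
            if 0 <= j < len(live_pct):
                err += abs(live_pct[j] - r)
                n += 1
        if n and err / n < best_err:
            best_err, best_shift = err / n, shift
    return best_shift
```

Once the shift is known, the flag-color samples from the live stream can simply be re-indexed onto the replay stream’s timebase.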
That lays the groundwork for better full-course caution handling, since our live admins in ISRA throw FCCs in the event of really bad accidents with cars immobile, flipped, etc. My next challenge is to make the Ranalyzer go back up to 1 minute from the caution throw, find all the cars that stopped in that time period, show each of them from a couple of different angles prior to their stopping, and then write that out in such a way as to make the CameraController rewind, show the accident, and jump back to live video before the race goes green again. And there’s more after that, but I’ll take it one step at a time…