<p>@gorlak’s dev blog, by Geoff Evans (<a href="http://gorlak.dev">gorlak.dev</a>)</p>
<h2><a href="http://gorlak.dev/conferences/2019/03/21/moderating-technical-discussions">Moderating Technical Discussions</a> (2019-03-21)</h2>
<p>I fell in love with the Technical Issues in Tools Development roundtable at GDC in the late 2000s. It was the only community I could find focused on building tools and tool infrastructure.</p>
<p>When the fantastic John Walker departed, I took up the mantle of moderating. Soon after, GDC asked me to join the Advisory Board. In my first year on the board I volunteered to provide feedback to the rest of the roundtables in the Programming track.</p>
<p>This post is the result of the thoughts I had attending roundtables (mainly in the Programming track) for the whole of GDC 2019 Main Conference.</p>
<p>Let’s start at the top…</p>
<h3 id="what-is-a-roundtable">What is a Roundtable?</h3>
<p><strong>A roundtable is a discussion whose topics are driven by the interests of the participants, and whose substance is relevant to the majority of the room.</strong></p>
<p>Participants have ample opportunity to:</p>
<ul>
<li>Propose new topics</li>
<li>Make comments on a topic</li>
<li>Ask follow-up questions while a topic is being discussed</li>
</ul>
<p><strong>The Moderator is an expert in the subject, but does not address topics directly.</strong></p>
<p>The Moderator’s expertise is mainly useful to:</p>
<ul>
<li>Clarify or reframe questions and topics into something the room can address</li>
<li>Generalize overly specific remarks into something valuable to the room</li>
<li>Specifically call upon known participants when necessary</li>
</ul>
<p>That last bullet is a bit tricky. Most of the roundtables I moderate do tend to have a panel of experts that attend repeatedly. It can be useful to push these experts to chime in on topics when you know they can speak about them.</p>
<h3 id="moderating-vs-participating">Moderating vs. Participating</h3>
<p><strong>Participants are there to talk to other participants or the room as a whole. Participants aren’t there to talk to The Moderator.</strong></p>
<p>Roundtables shouldn’t be like Q&amp;A sessions with a lecturer. You may have people raising hands to speak, but they don’t <em>really</em> want to speak to The Moderator. They are raising their hands to address the room. The Moderator needs to behave in a way that maximizes the room thinking they are the ones being addressed, collectively, by whomever happens to be speaking.</p>
<p>It’s really important that The Moderator take their ego out of the discussion as much as possible. They are a facilitator by default, and not a participant. Seamlessly jumping back and forth between both roles can expose bias toward The Moderator’s viewpoint. Participants could also become frustrated if they feel that The Moderator got more opportunities to express their opinion than they did.</p>
<p>Every roundtable discussion is a tiny community, and it should be an equitable one, where participants feel they are treated fairly.</p>
<p><strong>The Moderator can become a participant at several points in the discussion, but this change should be evident to the participants.</strong></p>
<p>I like to raise my own hand, and point at myself with the other hand to send a clear message to the room that I am switching from being The Moderator to being a participant. This is usually a mildly entertaining moment that helps keep your audience engaged; an opportunity to put some humor into the room (which helps people relax). Cues like this are worth doing because the participants will appreciate the equitable treatment of time between all participants.</p>
<h3 id="promoting-the-good">Promoting The Good</h3>
<p><strong>The Moderator keeps the discussion flowing quickly and directly to participants that want to speak.</strong></p>
<p>The most important factor for keeping a discussion lively is lowering the latency between speakers. The Moderator should be looking to shrink the interval between someone finishing their point and the next person beginning to speak. The lower the interval, the higher the velocity of the overall discussion. The higher the velocity, the more takeaways participants will get, and the higher the engagement.</p>
<p>To accomplish this The Moderator is constantly doing many things at once:</p>
<ul>
<li>Listening to the active speaker and verifying relevance, speed, and volume</li>
<li>Thinking about when and who will speak next on the current point</li>
<li>Thinking about what the next topic is and when to switch to it</li>
</ul>
<p>It’s vital for The Moderator to be thinking about what the next moderating event will be. These events are moving to the next speaker, changing topics, signaling the current speaker to wrap up, etc… The Moderator should overlap lining up the next participant to speak with the current speaker’s remarks, so that no time is wasted when the active speaker changes. Dead air between speakers is a significant contributor to distraction and boredom.</p>
<p><strong>The Moderator should physically focus on the non-speaking participants more than the current speaker.</strong></p>
<p>In addition to the above, The Moderator is <em>also</em> constantly doing these things:</p>
<ul>
<li>Looking around the room for a person signaling that they want to speak</li>
<li>Looking around the room for people that can’t hear or are distracted</li>
<li>Moving around the room, looking for new sight lines, and making eye contact</li>
</ul>
<p>It’s important that The Moderator be mostly <em>not</em> looking at who is speaking at any given moment. Upon breaking gaze, the speaker will almost always switch to addressing the most relevant participant in the room. This is encouraged because the current speaker <em>should</em> be addressing other participants or the room as a whole, and not talking specifically to The Moderator.</p>
<p>It’s important to move around the room during the discussion. This helps ensure no voice is omitted because of difficult sight lines, and it ensures that you <em>know</em> for sure that speakers can be heard on the opposite side of the room. You can sometimes cue distracted participants to pay attention by making eye contact while walking amongst the participants.</p>
<h3 id="mitigating-the-bad">Mitigating The Bad</h3>
<p><strong>Prevent a single point of view from dominating the conversation.</strong></p>
<p>Most things worth discussing have multiple approaches/viewpoints, and if folks that represent those viewpoints are in the room they should each have the opportunity to be heard. More concretely: some people will want to address many different topics multiple times over. At some point these people should be made to yield to others in the room to keep the conversation fresh.</p>
<p>As the discussion goes on, The Moderator should bias toward choosing new or seldom-heard speakers to chime in on topics. In the long run this reinforces that people should think carefully about which topics they can add enough value to warrant speaking on.</p>
<p><strong>Fight distraction and daydreaming by reminding the participants about the current topic.</strong></p>
<p>People <em>will</em> check their phone, use their laptop, etc… and lose track of the current topic. Once they are paying attention again, The Moderator should call out what the current topic is so they can re-engage.</p>
<p>It’s also useful to do this when a particular topic isn’t getting responses: reiterate or generalize the current topic until it’s clear there is something to say about it. There is always something to say about every topic, and reminding the audience that as a whole it’s failing to accomplish that <em>invariably</em> gets people to chime in.</p>
<p><strong>Use microphone/public-address systems sparingly.</strong></p>
<p>Running mics around the room introduces some difficult challenges. Mics:</p>
<ul>
<li>Increase latency between speakers (which contributes to dead air between topics and speakers)</li>
<li>Contribute to people mumbling more (asking people to speak up and project their voice helps this)</li>
<li>Add reverb to someone’s voice, depending on the room and speaker placement (more audio sources and bounces)</li>
</ul>
<p>So while mics make people louder, that doesn’t always net out in making people easier to understand, and it makes moderating effectively a bit harder.</p>
<h3 id="technical-issues-in-tools-development-format">Technical Issues in Tools Development Format</h3>
<p>Provided mostly for reference, here are the de-personalized notes I use to describe the format at the beginning of each session.</p>
<p>Reminders</p>
<ul>
<li>Remind people to fill out session review forms</li>
<li>Turn cell phones on vibrate</li>
</ul>
<p>Introductions</p>
<ul>
<li>Introduce yourself and note public profiles (twitter, facebook, etc)</li>
<li>List work history, why you deserve to moderate the discussion</li>
<li>Note any online community resources people may want to join</li>
</ul>
<p>Format</p>
<ul>
<li>I will call around the room for topics and write them on the whiteboard</li>
<li>We will work our way through them over the course of the session</li>
<li>We might not make it through every topic, sorry if we don’t get to yours</li>
<li>We will bias topics for this particular session towards <em>X</em></li>
<li>I will note more relevant topics at the top, and less relevant ones at the bottom</li>
<li>Time permitting we can discuss less relevant or even new topics, so keep thinking</li>
<li>I may cut off speakers that are taking a lot of time, sorry about that</li>
<li>Please raise your hand and make eye contact with me if you want to speak</li>
<li>Keep your hand up as I will scan the room while speakers are talking</li>
<li>I will nod in acknowledgement that you want to speak, and may come back to you</li>
<li>The first time you speak please state your name and studio/company</li>
<li>I may gesture you to speak up by flailing my arms</li>
</ul>
<p>Issue the call for topics.</p>
<p>Begin discussing the first topic.</p>
<h2><a href="http://gorlak.dev/conferences/2019/03/18/shipping-features-slides">Shipping Features Responsibly</a> (2019-03-18)</h2>
<hr />
<h3><a href="/assets/shipping-features.pdf">Here is the PDF of my deck from the talk!</a></h3>
<hr />
<h3 id="agenda">Agenda</h3>
<p>“With great power comes great responsibility”
-Voltaire / Spider-Man / Stan Lee</p>
<ul>
<li>This talk gives high level best practices of releasing new features</li>
<li>Not all features need to follow all guidelines</li>
<li>Any feature needs some combination of these</li>
</ul>
<hr />
<h3 id="me">Me</h3>
<ul>
<li>Gamedev Tools Engineer since 2003</li>
<li>Infinity Ward, Kojima Productions, Insomniac</li>
<li>Leading a tools team at IW
<ul>
<li>We are hiring, and having a party tonight!</li>
<li><em>tinyurl.com/toolshappyhour2019</em> for details</li>
</ul>
</li>
<li>History in revision control, branching, file formats</li>
<li>Built asset editors, level editors, tools frameworks</li>
</ul>
<p>^ But this talk isn’t really about me; it’s mostly about…</p>
<hr />
<h3 id="my-evil-twin">My Evil Twin</h3>
<ul>
<li>Extensive experience in <em>breaking</em> artists & designers</li>
<li>I’ve halted entire studios’ progress for hours</li>
<li>I’ve degraded artists’ productivity for days</li>
<li>I’ve released tools that are:
<ul>
<li>undertested (buggy)</li>
<li>underdesigned (don’t work in practice)</li>
<li>overdesigned (confusing/too many features)</li>
</ul>
</li>
</ul>
<p>^ No details b/c of Vault, talk to me later tonight</p>
<p>^ All this qualifies me to lecture you about how to not be like me, and waste people’s time</p>
<hr />
<p><img src="/assets/shipping-features-pending-changelist.gif" alt="inline" /></p>
<p>^ You may or may not have consciously chosen to be here</p>
<p>^ You have written some code, decided things should be different</p>
<p>^ Or maybe you are following up on a feature request opened by a user</p>
<hr />
<p><img src="/assets/shipping-features-stop-sign.jpg" alt="inline" /></p>
<p>^ Stop right there, buddy.</p>
<p>^ You almost definitely aren’t ready to pull the trigger.</p>
<p>^ I’m here to tell you all the things you need to do (to not be like me).</p>
<hr />
<h3 id="1---question-your-design">1 - Question Your Design</h3>
<ul>
<li>Have you <em>actually</em> done all the design work to ship this feature?
<ul>
<li>Remember: <em>impact</em> usually surpasses <em>intent</em></li>
<li>Take a step back and make <em>intent</em> crystal clear</li>
</ul>
</li>
<li>What problem are you trying to solve?
<ul>
<li>Is this something that only bothers (or makes sense to) <em>you</em>?</li>
<li>Imagine giving an “elevator pitch” of explaining this design</li>
<li>Do you sound like a crazy person?</li>
<li>Is this a waste of time/effort?</li>
</ul>
</li>
</ul>
<p>^ Try your best to talk yourself out of making the change</p>
<p>^ It’s perfectly ok to start a lot of changes and walk away!</p>
<p>^ When in doubt: put it on ice and move on to the next thing</p>
<hr />
<h3 id="2---estimate-the-impact">2 - Estimate The Impact</h3>
<ul>
<li>Have you considered how it will impact:
<ul>
<li>Every workflow permutation (use case)?</li>
<li>How do you know? Have you searched the code (GREP)?</li>
<li>How will offsite staff and outsource vendors work?</li>
<li>Downstream projects (teams offset in time)?</li>
<li>Fellow engineers (different from users)?</li>
</ul>
</li>
</ul>
<p>^ A good time to do some “Rubber Duck Debugging”-style explainer of the change</p>
<p>^ A preliminary code review to help make the impact knowable</p>
<p>^ Just looking to understand the scope of your change</p>
<hr />
<h3 id="3---perform-and-document-your-testing">3 - Perform And Document Your Testing</h3>
<ul>
<li>
<p>The bigger the impact, the more testing you need</p>
</li>
<li>Be greedy with automated tests
<ul>
<li>No cheating: actually wait for them to finish!</li>
</ul>
</li>
<li>Critical code paths should be stepped through in the debugger
<ul>
<li>Don’t end up saying “How did this <em>ever work</em>?”</li>
</ul>
</li>
<li>Document testing you have done in change comment
<ul>
<li>Adds value to the code review process</li>
</ul>
</li>
</ul>
<p>^ Remind code reviewers that your code works by stating tests</p>
<p>^ They aren’t strictly looking for bugs, evaluating your approach</p>
<p>^ If tests are documented reviewers may think of tests you are missing</p>
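<p>As a tiny, hedged illustration of the “be greedy with automated tests” point, here is a sketch in Python. The function and its behavior are hypothetical, invented purely for illustration: a critical path covered by assertions that you actually run to completion, then cite in the change comment.</p>

```python
# Hypothetical example: a small "critical path" function and the
# regression test you would run (to completion!) before submitting,
# then document in your change comment.

def pack_asset_name(name: str) -> str:
    """Normalize an asset name the way the (hypothetical) pipeline expects."""
    return name.strip().lower().replace(" ", "_")

def test_pack_asset_name() -> None:
    # Cover the workflow permutations you claim to have tested.
    assert pack_asset_name("  Hero Mesh ") == "hero_mesh"
    assert pack_asset_name("already_ok") == "already_ok"
    assert pack_asset_name("MIXED Case Name") == "mixed_case_name"

if __name__ == "__main__":
    test_pack_asset_name()
    print("all tests passed")
```

<p>Listing exactly these cases in the change comment is what lets a reviewer notice the permutation you missed.</p>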
<hr />
<h3 id="4---measure-and-document-your-performance">4 - Measure And Document Your Performance</h3>
<ul>
<li>
<p>Is it faster? Is it slower? By how much?</p>
</li>
<li>
<p>What is the bottleneck? Did you change it?</p>
</li>
<li>Have you measured the impact on resources?
<ul>
<li>CPU: are you wasting CPU cycles needlessly?</li>
<li>Memory: what about going wider on many-core CPUs?</li>
</ul>
</li>
<li>Disk Space
<ul>
<li>Have you considered how disk caches will be evicted?</li>
<li>How will you know if eviction has a bug?</li>
</ul>
</li>
<li>Server Load
<ul>
<li>Do you have caching (or rate limiters) on your key API routes?</li>
</ul>
</li>
</ul>
<p>^ You should know and document all these things</p>
<p>^ Take credit for speed improvements, and justify perf hits</p>
<p>^ Demonstrate through documentation that you have thought it through</p>
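<p>The “caching (or rate limiters) on your key API routes” bullet can be sketched with a token bucket. This is a minimal generic illustration, not anything from the talk; the class and parameter names are made up.</p>

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch for an API route."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # steady-state requests per second
        self.capacity = burst         # short bursts allowed above the rate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A route handler would check bucket.allow() and reject (e.g. HTTP 429)
# when it returns False.
bucket = TokenBucket(rate_per_sec=5.0, burst=2)
```

<p>The point of measuring first is knowing whether your change turned a cheap route into one that needs a bucket like this at all.</p>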
<hr />
<h3 id="5---prepare-for-failure">5 - Prepare For Failure</h3>
<ul>
<li>What does the worst possible failure look like?
<ul>
<li>How (and when) will you know if this is happening?</li>
</ul>
</li>
<li>How easy is this change to roll back?
<ul>
<li>Are there costs that make roll back painful?</li>
<li>If so, what is the commensurate change in testing?</li>
<li>Should you launch this feature behind a feature toggle (setting)?</li>
</ul>
</li>
<li>Are you <em>actually</em> prepared to roll back if it causes a problem?
<ul>
<li>What time is it <em>right now</em>?</li>
<li>When do you plan on leaving work?</li>
</ul>
</li>
<li>How big of a problem are you willing to fix w/o rolling back?</li>
</ul>
<p>^ The best practice for releasing a change is to roll back at first sight of defect</p>
<p>^ Engineers, however, are in the headspace of thinking through the technology</p>
<p>^ Roll back first trains your muscle memory to respect others’ time more than your own</p>
<p>^ Train yourself to roll back at the first sign of trouble</p>
<hr />
<h3 id="6---update-user-documentation">6 - Update User Documentation</h3>
<ul>
<li>
<p>Have you searched for documentation that needs updating?</p>
</li>
<li>
<p>There may be many places where “documentation” exists…</p>
<ul>
<li>Wiki</li>
<li>Code comments</li>
<li>Documents in revision control</li>
<li>Recent message threads about an issue</li>
</ul>
</li>
</ul>
<p>^ Documentation falls prey to broken windows effect, fight it</p>
<p>^ Pointing to relevant documentation when people ask should feel great!</p>
<hr />
<h3 id="7---send-good-notifications">7 - Send Good Notifications</h3>
<ul>
<li>Now the feature is released, it’s time to tell people!
<ul>
<li>Always TLDR in your emails</li>
<li>Provide details for technical stakeholders (maybe separately)</li>
<li>If the performance wins/costs are substantial, tell people</li>
<li>Include images, GIFs, videos: eye candy helps!</li>
</ul>
</li>
<li>Consider and advise about issues you may expect
<ul>
<li>Include some advice for fixing potential issues</li>
<li>Be on call to answer replies within a time window</li>
</ul>
</li>
</ul>
<p>^ This is tricky to get right</p>
<p>^ Good communication is easy to lose and hard to gain</p>
<p>^ You have to make it short and simple to have a shot at keeping people reading</p>
<hr />
<h3 id="8---do-the-follow-up">8 - Do The Follow Up</h3>
<ul>
<li>Contact users that are meant to benefit from the change
<ul>
<li>Did they actually pay attention to your notification?</li>
<li>Did they actually benefit as you intended?</li>
<li>Did they experience a bug, and forget to tell someone?</li>
</ul>
</li>
</ul>
<p>^ This is critical to you improving as a developer</p>
<p>^ Any missed communication at this stage means you need new procedures</p>
<p>^ Remember that different users receive communication in different ways</p>
<p>^ You may need to reiterate the same information multiple ways</p>
<hr />
<h3 id="9---post-mortem-yourself">9 - Post Mortem Yourself</h3>
<ul>
<li>How many times did you have to roll back?
<ul>
<li>How are roll backs trending?</li>
<li>What can you do to decrease them?</li>
</ul>
</li>
<li>Verify with people randomly:
<ul>
<li>Your TLDR was short enough</li>
<li>Your details are pertinent</li>
</ul>
</li>
</ul>
<p>^ I like to do this in the kitchen, just ask if people saw and understood my notes</p>
<hr />
<p>Review:</p>
<ol>
<li>Question Your Design</li>
<li>Estimate The Impact</li>
<li>Perform And Document Your Testing</li>
<li>Measure And Document Your Performance</li>
<li>Prepare For Failure</li>
<li>Update User Documentation</li>
<li>Send Good Notifications</li>
<li>Do The Follow Up</li>
<li>Post Mortem Yourself</li>
</ol>
<p>^ Now you are actually done shipping your feature. It’s a <em>lot</em> of work:</p>
<hr />
<h3 id="questions">Questions?</h3>
<p>Twitter: @gorlak</p>
<p>Community:</p>
<ul>
<li>Twitter: @thetoolsmiths</li>
<li>Website: thetoolsmiths.com</li>
<li>Chat: thetoolsmiths.slack.com</li>
</ul>
<p>Call of Duty Happy Hour: <em>tinyurl.com/toolshappyhour2019</em></p>
<p>See you later tonight!</p>
<h2><a href="http://gorlak.dev/recruiting/2018/07/13/tools-engineer-faq">Tools Engineer Recruiting FAQ</a> (2018-07-13)</h2>
<p>I recently wrote a document to help talent through our recruiting pipeline at Infinity Ward. It’s come up in <a href="https://thetoolsmiths.org">the slack</a> to positive reviews, so I’ll repost it here. I wrote this because I felt I could pre-answer many questions about how the team is set up within the studio, and ease talent from outside the game industry into the process a bit better.</p>
<p>Before prepping this document I had a candidate or two who wasn’t really clear on what the tools discipline is. Presenting a document that lays this out streamlined our company process and better respects the applicant’s time.</p>
<hr />
<p><em>Greetings!</em></p>
<h2 id="introduction">Introduction</h2>
<p>If you are reading this, your resume/profile has been viewed and determined to be a possible overlap with the Tools team at Infinity Ward! Tools work may or may not overlap with your background, or perhaps not even with where your current interests lie, but this document has been written to clarify things enough so that you can decide if a role on the team is right for you. Fit is very important to our team, so we put this document together to provide a better view of what Tools Engineering is at Infinity Ward.</p>
<h3 id="what-is-a-tools-engineer">What is a Tools Engineer?</h3>
<p>A Tools Engineer is a software engineer who supports the development of the game’s content and code, rather than the game itself. They work to improve the rate-of-change at which the game can be built, to improve the quality of the game, and to enhance the user experience of creating the game. Engineers that gravitate toward tools development tend to want to:</p>
<ul>
<li>Analyze workflow and improve the usability of the tools that designers and artists use</li>
<li>Improve the efficiency at which changes can be shown in the game</li>
<li>Take measurements and communicate defects and inefficiencies in the game</li>
<li>Support studio services that accelerate and supply information about the tools</li>
<li>Improve the productivity of fellow engine and game engineers</li>
</ul>
<hr />
<h2 id="what-are-the-different-disciplines-within-tools-engineering">What are the different disciplines within Tools Engineering?</h2>
<p>The Tools team at Infinity Ward doesn’t mandate that people choose and focus on only one discipline. Instead, the breakdown below is meant to help identify which areas of tools development team members find most gratifying. There are frequent opportunities to do work in a discipline that might be outside your comfort zone. Domain knowledge is deep within each discipline, so knowing/learning/changing which discipline is most interesting to a team member can help ensure that they stay happy and focused.</p>
<h3 id="content-editing-workflow">Content Editing Workflow</h3>
<p>Content editor work focuses on traditional Windows desktop-style application engineering. This development is two-fold:</p>
<ul>
<li>Constant auditing and pursuing high quality and efficient user-experience of the content editing tools</li>
<li>Improving the state of the underlying software architecture of those content editing tools</li>
</ul>
<p>Skills and Concepts:</p>
<ul>
<li>User experience auditing and content creator communication/feedback</li>
<li>User interface toolkits (such as Qt, wxWidgets, WPF, WinForms, FLTK, IMGUI, etc…)</li>
<li>Document formats (such as XML, JSON, YAML, etc…)</li>
<li>Revision control integration (such as Perforce, Plastic SCM, etc…)</li>
<li>File system watchers and document hot-reloading</li>
<li>Interprocess communication and shared object synchronization</li>
</ul>
<h3 id="content-build-pipeline">Content Build Pipeline</h3>
<p>Content build pipeline work focuses on the asset pipeline and all its data processing technology. The build pipeline is the collective noun for all of the tools that process data into the form that is loadable by the game (both the development version of the game as well as the final version that goes on the disc).</p>
<p>Skills and Concepts:</p>
<ul>
<li>Build systems (such as Make, MSBuild, Jam, FASTbuild, Ninja, etc…)</li>
<li>Dependency analysis (modification time, checksum)</li>
<li>Pipeline auditing for iteration throughput</li>
<li>Parallelism and concurrency</li>
<li>Global optimization</li>
<li>Determinism</li>
</ul>
<h3 id="reliability--infrastructure">Reliability & Infrastructure</h3>
<p>Reliability work focuses on studio services that support all engineering, content creation, and the at-large process of facilitating change over the course of production.</p>
<p>Skills and Concepts:</p>
<ul>
<li>Continuous Integration (in its traditional definition: the folding of changes against each other for early detection of integration test failure)</li>
<li>Continuous Validation (automatic validation as change commences within revision control)</li>
<li>Computer Configuration-As-Code (such as Puppet, Ansible, etc…)</li>
<li>Implementing and honing the reporting of the telemetry implementation within the studio</li>
<li>Health monitoring and data corroboration of key services like caching servers (CIFS, Redis, memcached, etc…) as well as distributed build systems (SN-DBS, IncrediBuild, etc…)</li>
</ul>
<hr />
<h2 id="what-is-the-application-process">What is the application process?</h2>
<p>The process has three steps:</p>
<h3 id="step-1-one-hour-phone-interview-with-the-team-leader">Step 1: One hour phone interview with the team leader</h3>
<p>Expect the call to take up to an hour, but it can be as short as 30 minutes. A shorter call doesn’t mean you necessarily did worse (or better). The call consists of three parts:</p>
<ul>
<li>The short ice-breaker discussion of the team, studio, and broader corporation (mostly answering any questions you may have beyond the scope of this document).</li>
<li>A technical “Skills Check” across three main topics: Computer Architecture, Native Language Programming, and Vector Mathematics. The skills check follows a standard regimen of questions regardless of seniority, and as such focuses on the fundamentals of each topic. Taking the time to answer these questions both helps us roughly judge where your strengths lie, and it gives us a sense for how you communicate about technical issues.</li>
<li>Reflecting upon the skills check and discussion about how your strengths and desired growth fit into the team.</li>
</ul>
<h3 id="step-2-take-home-programming-test">Step 2: Take home programming test</h3>
<p>The test is a mix of written questions and a couple of programming problems, and is open book (with citations and time keeping).</p>
<p>The written questions are designed to give us a sense of your written communication style, and to probe your opinions and experience on typical software development topics.</p>
<p>The programming problems typically take a couple hours and require you to think through some nontrivial (but hopefully interesting) design problems, and implement at least one fully functioning/tested solution.</p>
<h3 id="step-3-an-all-day-in-person-interview">Step 3: An all day in-person interview</h3>
<p>In-person interview days are, again, in two parts (so many things have two parts!!) and separated by a lunch break with team members:</p>
<p>The morning has two interview sessions with groups of Tools team members.</p>
<p>The afternoon has two interview sessions with groups of other engineers from the studio, and a final short interview with the studio CTO.</p>
<p>Each interview session has a break in between. Feedback is gathered as the interview day commences, so your day may be cut short if the fit isn’t there (which is why you interview with the Tools team first).</p>
<hr />
<h2 id="who-leads-the-tools-team-at-infinity-ward">Who leads the tools team at Infinity Ward?</h2>
<p>Geoff Evans is a veteran of many AAA studios’ tools teams, including:</p>
<ul>
<li>Insomniac Games (8 years)</li>
<li>Neversoft Entertainment (2 years)</li>
<li>Kojima Productions (2 years)</li>
<li>Infinity Ward (since 2015)</li>
</ul>
<p>If you have specific questions or concerns you can get in touch with him directly via <a href="https://twitter.com/gorlak">twitter</a></p>
<p>Geoff Evans was also the founder of The Toolsmiths, an online community of Tools Engineers. They mainly communicate through a Slack instance. Feel free to DM your email address to <a href="https://twitter.com/thetoolsmiths">@thetoolsmiths on twitter</a> for an invite, and join the discussion!</p>
<h2 id="what-are-some-resources-for-improving-tools-related-skills">What are some resources for improving tools related skills?</h2>
<p>You can read over a long list of skills and techniques that are good to know in <a href="https://gist.github.com/gorlak/1a0747efe88c5e3998144c5787d090ec">this gist</a></p>
<p>Also, there is a collection of GDC (and other sites) talks that go over interesting tools concepts in <a href="https://gist.github.com/gorlak/f69c84adf4d70b04aad9">this gist</a></p>
<h2><a href="http://gorlak.dev/conferences/2016/08/06/insomniac-core-dump">Insomniac Core Dump</a> (2016-08-06)</h2>
<h3 id="10am-intro-to-day-1">10am Intro to Day 1</h3>
<ul>
<li>This conference was inspired by HandmadeCon, borrowing the formula from them. Mostly ad-hoc/interview style.</li>
<li>Video from Ted Price about stuff, sharing with the community. Thanks for the conference, Ted!</li>
<li>Mike answering questions submitted by attendees. Thoughts on small stuff.</li>
<li>All the videos will be posted online sometime after the conference wraps</li>
</ul>
<p>Insomniac Core Team structure:</p>
<ul>
<li>Syndicate team</li>
<li>Cinematics team</li>
<li>Animation team</li>
<li>Rendering team</li>
</ul>
<h3 id="1030am-interview-w-andreas-fredriksson">10:30am Interview w/ Andreas Fredriksson</h3>
<ul>
<li>Lead of Syndicate team (general tools team)</li>
<li>Fleet of memory allocators, several different heaps (bookkeeping data internal and external – for GPU access)</li>
<li>Things missing from debuggers: write strides of memory to a file, full system debugging</li>
<li>Trials with transitioning to lead status from senior engineer</li>
<li>Demoscene and c64 programming w/ the team over holiday break</li>
</ul>
<h3 id="11am-interview-w-jonathan-garrett">11am Interview w/ Jonathan Garrett</h3>
<ul>
<li>Lead of Animation/Audio/Physics team</li>
<li>Node graph editor of animation driver, sounds interesting but a demo would be great</li>
</ul>
<h3 id="1130am-panel">11:30am Panel</h3>
<p>How does Insomniac manage build data?</p>
<p>Andreas F, Bob Sprentall, Jonathan Adamczewski</p>
<ul>
<li>Bob: The build aims to be an invisible thing, no intervention</li>
<li>Bob: Each asset type has a builder tool (invoke that builder once to build all assets of that type? not sure)</li>
<li>Bob: Build system in front of builder, after the build runs once, no up front dependency registration</li>
<li>Adam: Recast navigation is globally optimized by loading all data to analyze spatial proximity</li>
<li>Andreas: No file i/o in the game, only request load of asset-id (probably from LunaServer), not file (a very good thing)</li>
<li>Andreas: Dependency graph of all the assets the engine has loaded -> key assets</li>
<li>Andreas: DG is stored off for disc mastering to know how to layout assets (archiver tool analyzes the data)</li>
<li>Mike: Asset-id is a hash, they see collisions semi-frequently</li>
<li>Andreas: Don’t use fios because DG loader has the info they need (serial loader working on known list?)</li>
<li>Mike: They use updater script to graduate/migrate source data to new schema on demand, old data stays old in the repository (I think)</li>
<li>Adam: They do have a cache, doesn’t sound content-hash addressable, fetches entries that might not be relevant (has deps inside it)</li>
<li>Andreas: No sparse syncing, cordoning off departmental data, no task-specific cordons yet</li>
<li>Andreas: Feature development on the engine/tools take place on other branches</li>
<li>Bob: Branches are used to gate release to production</li>
</ul>
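<p>The asset-id remarks above (assets identified by a hash, with collisions seen semi-frequently) can be illustrated with a toy registry. The hash function and 32-bit width here are assumptions for illustration; the panel did not specify either.</p>

```python
import hashlib

def asset_id(name: str) -> int:
    """Toy 32-bit asset id: truncate a SHA-256 of the asset name."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "little")

class AssetRegistry:
    """Detects the collisions a short hash inevitably produces at scale."""

    def __init__(self):
        self._names_by_id = {}

    def register(self, name: str) -> int:
        aid = asset_id(name)
        # Remember the first name seen for this id; a second, different
        # name mapping to the same id is a collision.
        prior = self._names_by_id.setdefault(aid, name)
        if prior != name:
            raise ValueError(f"asset-id collision: {name!r} vs {prior!r}")
        return aid
```

<p>With only 32 bits, the birthday bound makes collisions likely once the asset count reaches the tens of thousands, which is consistent with the “semi-frequently” remark.</p>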
<h3 id="130pm-panel">1:30pm Panel</h3>
<p>Reflections on >10 years at Insomniac</p>
<p>Jonathan Garrett, Giac Veltri, Chris Edwards</p>
<ul>
<li>Giac: Manual build discrete asset types (mb, tb, lb) -> evolve to unified buildtool w/ dependency graph</li>
<li>Jonny: Iteration time and 5s turnaround time, particles</li>
<li>Chris: Asset tagging issues, not stored inside the actual assets, have to migrate separately, a misstep</li>
<li>Giac: Separate viewers for different asset types, a mixed bag</li>
</ul>
<h3 id="200pm-interview-w-dave-dimov">2:00pm Interview w/ Dave Dimov</h3>
<p>QA for tools/engineering group</p>
<ul>
<li>They shadow users’ behavior with the tools, and roll manual tests for quality</li>
<li>QA team members help champion usability of the tools, but there appears to be no clear owner for measuring usability</li>
<li>3 week release cycle into all titles, a branch tracks the engine in each title</li>
<li>Each game gets a full test pass through each code release</li>
</ul>
<h3 id="230pm-panel">2:30pm Panel</h3>
<p>Lessons Joining the Insomniac Engine Team</p>
<p>Vitor Menezes, Evan Hatch, Dale Kim</p>
<ul>
<li>A bunch of junior engineers being bewildered by fancy things (not my bag)</li>
<li>Some others had good takeaways, staff/senior team members at Insomniac especially</li>
</ul>
<h3 id="3pm-interview-w-abdul-bezrati">3pm Interview w/ Abdul Bezrati</h3>
<ul>
<li>Draw calls, batching, shader variation management</li>
<li>Frame breakdown, deferred w/ forward pass</li>
</ul>
<h3 id="430pm-interview-w-elan-ruskin">4:30pm Interview w/ Elan Ruskin</h3>
<ul>
<li>Callback to talk about forensic debugging</li>
<li>Callback to GDC talk about statistics</li>
<li>Physics and Havok rundown about how simulations don’t necessarily make the best gameplay, how to compromise reality for each title</li>
<li>The whole studio is making a game, an entertainment product; don’t lose sight of that</li>
<li>Using the tools is very instructive to improving them</li>
</ul>
<h3 id="500pm-interview-w-chris-edwards">5:00pm Interview w/ Chris Edwards</h3>
<ul>
<li>LunaServer - hosts assets over a socket/web API, provides undo/redo to connected tools</li>
<li>Takes incremental changes and generates a journal of recent changes on that machine, written to db, then file</li>
<li>Changes also live inside the memory of the server, Save is a feature of LunaServer, not the tools</li>
<li>MongoDB stores all the assets, overly tied to the way it wants assets formatted</li>
<li>LunaServer is coded with the formats that are relevant to the game; revising those formats needs a new build of LunaServer</li>
<li>If choosing to do things over again would not use a database for persistence of local asset state, keep in memory</li>
<li>Hooks for specific types of asset to invoke specific build steps and pass directly to the game, a cheat on the original design</li>
</ul>
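<p>The notes above describe undo/redo living in the asset server rather than in each tool. A minimal sketch of that idea — a journal of (key, before, after) entries with a cursor — might look like the following; the names and structure are guesses, not LunaServer’s actual design:</p>

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Server-side asset host: every edit is journaled, so undo/redo is a
// property of the host, not of each connected tool
class AssetHost
{
public:
    void Set( const std::string& key, const std::string& value )
    {
        m_Journal.resize( m_Cursor ); // a new edit truncates any redo tail
        m_Journal.push_back( { key, m_Data[key], value } );
        m_Cursor = m_Journal.size();
        m_Data[key] = value;
    }
    bool Undo()
    {
        if ( m_Cursor == 0 ) return false;
        const Entry& e = m_Journal[--m_Cursor];
        m_Data[e.key] = e.before;
        return true;
    }
    bool Redo()
    {
        if ( m_Cursor == m_Journal.size() ) return false;
        const Entry& e = m_Journal[m_Cursor++];
        m_Data[e.key] = e.after;
        return true;
    }
    std::string Get( const std::string& key ) const
    {
        auto it = m_Data.find( key );
        return it == m_Data.end() ? std::string() : it->second;
    }
private:
    struct Entry { std::string key, before, after; };
    std::vector<Entry> m_Journal;
    std::size_t m_Cursor = 0;
    std::unordered_map<std::string, std::string> m_Data;
};
```

<p>Because the journal lives with the data, any tool connected to the host gets undo/redo for free, and the same journal doubles as the "recent changes" stream the notes mention being written to a database and then to file.</p>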
<h3 id="530pm-panel">5:30pm Panel</h3>
<p>What’s our strategy and lessons learned wrt tools UI development</p>
<p>Andreas F, Chris E, Giac V</p>
<ul>
<li>Chrome auto-updates were catastrophic: walk in Monday morning and nobody can work</li>
<li>Web browser environment too restrictive for the heavy lifting for game development</li>
<li>JavaScript talent left the building and team members didn’t want to learn it</li>
<li>Moving to all C++ tools, new and old editors go between LunaServer in realtime, verify each other</li>
<li>Want more parallelism in the tools and build</li>
</ul>
<h3 id="1030am-panel">10:30am Panel</h3>
<p>How do we manage our streaming issues?</p>
<p>Abdul B, Bob S, Jonathan A, Carl-Hendrik Skarstedt (Yacht Club Games), Chad Barb (Respawn Entertainment)</p>
<ul>
<li>Defining the streaming problem</li>
<li>Player movement impacts the requirements for streaming</li>
<li>Chad: Respawn mainly doing fixed size level and mip streaming, use HDD vs. optical</li>
</ul>
<h3 id="11am-interview-w-giac-veltri">11am Interview w/ Giac Veltri</h3>
<p>Guest: Matt Sharpe (Harmonix)</p>
<ul>
<li>Flash based node graph porting to native toolkit (Qt)</li>
<li>Flash had scaling issues, could take multiple minutes to load, native took it down to seconds</li>
<li>Matt: node based editors in Qt as well, for connecting events to the game (audio and visual effects)</li>
<li>JavaScript in the browser pain points, difficult contortion to get performance (manual cloning of template objects on anim frames.. eh?)</li>
</ul>
<h3 id="1130am-interview-w-jonathan-adamczewski">11:30am Interview w/ Jonathan Adamczewski</h3>
<p>Guest: Tony Albrecht (Riot)</p>
<ul>
<li>Programming languages, native code and assembly</li>
<li>23m to build tools, engine, and game -> msbuild</li>
<li>Tony: FASTBuild works well, not connected to DBS because of working remote</li>
</ul>
<h3 id="130pm-panel-1">1:30pm Panel</h3>
<p>What strategies do we use to approach debugging and profiling issues</p>
<p>Jonathan G, Tony Arciuolo, Elan R, Jonathan A, Guest: Tony Albrecht</p>
<ul>
<li>Jonny: Macros that instrument the codebase for on-screen profiler (standard stuff), hooked into RAD’s Telemetry</li>
<li>Tony: Debug shader to draw skinned characters at bind pose in contrast color was very helpful in Sunset Overdrive</li>
<li>Elan, Jon A: WPA is a fantastic tool for getting a lot of profile data</li>
<li>Elan: Dota2? (referenced indirectly) has $500/microsecond of server CPU cost at scale</li>
</ul>
<h3 id="2pm-interview-w-ron-pieket">2pm Interview w/ Ron Pieket</h3>
<p>Guest: Kalin (Funktronic Labs)</p>
<ul>
<li>VR headsets are annoying to take on and off while developing; the TTY becomes a less useful diagnostic tool</li>
<li>Funktronic is making an RTS, experimenting with content creation in VR</li>
</ul>
<h3 id="230pm-panel-1">2:30pm Panel</h3>
<p>How do we manage iteration time (including for programmers)?</p>
<p>Jonathan G, Andreas F, Eric Li (Harmonix), Danielle Cerniglia (1st Playable Productions)</p>
<ul>
<li>Programmer iteration time isn’t frequently a focus for compiler writers, who focus mainly on optimization</li>
<li>Don’t hesitate to implement workflows that cut entire departments out of the iteration loop, freeing up human cycles</li>
<li>Eric: Ended up making a simple indirection/automation solution for an audio staff member to experiment w/ design idea</li>
</ul>
<h3 id="3pm-interview-w-tony-arciuolo">3pm Interview w/ Tony Arciuolo</h3>
<p>Guest: Matt Pettineo (Ready at Dawn)</p>
<ul>
<li>Matt: Q: What are the changes necessary to go multi-platform?</li>
<li>Tony: A: Mostly software API issues and data organization issues to fulfill those APIs</li>
<li>Matt: Any way to tie a performance issue back to content is very beneficial (shaders)</li>
<li>Tony: A tricky issue is when env + lighting fight over who will benefit from complexity added to the scene, programmers arbitrate</li>
<li>Matt: It tends to be the lighters we arbitrate against because they are more technical; make them pick their battles</li>
<li>Tony: GI solution builds distributed across the studio</li>
</ul>
<h3 id="430pm-panel">4:30pm Panel</h3>
<p>How do we approach VR?</p>
<p>Abdul B, Bob S, David Neubelt (Ready at Dawn)</p>
<ul>
<li>4-5ms per viewport to draw the whole scene to keep framerate for VR</li>
<li>No motion blur, no god rays, etc… due to time</li>
<li>Draw good stuff, not more stuff</li>
</ul>
<h3 id="5pm-interview-w-bob-sprentall">5pm Interview w/ Bob Sprentall</h3>
<p>Guest: Garret Foster (Ready at Dawn), Chris Butcher (Bungie)</p>
<ul>
<li>Mike wants to talk about build systems (woot!)</li>
<li>Chris: Need good solution for quickly sharing changes with neighbor devs (w/o submitting and syncing I think)</li>
<li>Garrett: Can’t have multiple copies of the game on a hard drive, 2TB #head</li>
<li>Garrett: Custom p4 client views for different departments</li>
<li>Bob: Mongodb stores build metadata, moved into in-memory db (LMDB)</li>
<li>Bob: Cache was a side-project, scaled up out of side-project status, web-tech based cache didn’t perform well (mongodb/web-req)</li>
<li>Chris: Build dependency graph is pretty arbitrary, no hard lines between assets, heavy interdependence</li>
<li>Chris: Constant buffers built for environment interdepend bi-directionally on the shaders used to draw them, heavy build cost (whaaaaa)</li>
<li>Chris: Adding dependencies between assets shouldn’t necessarily be cheap; some friction discourages heavy interdependence</li>
<li>Chris: A big regret is not seeing PC-centric content creation coming; would have made different choices since it has better random asset access</li>
<li>Bob, Garret: Pipeline has platform-independent step, then a platform-dependent step</li>
<li>Bob: Doubled up on processing identical data, took action to avoid it, but added on after the fact</li>
<li>Chris: Metadata for all the assets is several GB in and of itself (!)</li>
<li>Chris: The cache is pretty monolithic, corresponds to a single changelist (stable builds are identified and encouraged to sync)</li>
<li>Chris: Unstable/bleeding edge builds are available but more costly to sync to because caching isn’t complete</li>
<li>Garret: No toleration of missing assets, hard error; tolerating missing assets could lead to subtle bugs, skipped reporting tools to police</li>
<li>Garret: Building data in debug (w/ scribbling, init of memory pages) and in release and compare helps find determinism bugs</li>
<li>Garret: CI builds and populates the cache as changes commence</li>
<li>Garret, Bob: Peers/users do push data into the cache, problematic machines are found and fixed, breadcrumb left in the cache entry</li>
<li>Chris: Consideration of spatial relationships are important for caching, esp for caching baking problems (lighting)</li>
<li>Chris: Content creators always want shipping quality of the bakes (of course they do)</li>
<li>Chris: Bias baking techniques toward cacheable solutions (Umbra)</li>
</ul>
<h3 id="530pm-panel-1">5:30pm Panel</h3>
<p>What internal tools have we needed to develop to solve engine or tools related issues?</p>
<p>Jonathan G, Andreas F, Bob S, Jim Hill (Dreamworks Animation)</p>
<ul>
<li>Bob: Use LunaServer’s web-tech architecture to connect to users’ backends, inspect build system state, and make fixes</li>
<li>Andreas: <a href="https://github.com/deplinenoise/ig-memtrace">Trace</a> all the memory traffic and callstacks, transports over the network to a tool where you can scrub events</li>
<li>Andreas: Custom events at level loads, etc… tool allows you to see previous memory owners</li>
<li>Andreas: You need to get outside the comfort zone of your language of preference, doesn’t matter what it is</li>
</ul>
Manual Memory Leak Checking2013-08-13T00:00:00+00:00http://gorlak.dev/articles/2013/08/13/manual-memory-leak-checking
<p>Badly-behaving third party libraries sometimes allocate heap memory into global variables (or private static member variables). This can cause the shutdown leak check to dump false-positive leaking objects. They get in the way when trying to track down which allocations in your code aren’t properly matched with a free.</p>
<p>Fortunately the shutdown leak check is really just calling some other debug CRT functions, which are exposed to the user:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#include &lt;crtdbg.h&gt; // _CrtMemState and the _CrtMem* functions (MSVC debug CRT)
#include &lt;string.h&gt; // memset

// capture debug heap state before doing stuff
_CrtMemState startState;
memset( &amp;startState, 0, sizeof( startState ) );
_CrtMemCheckpoint( &amp;startState );

// do stuff!

// dumps leaked blocks to the IDE's Output window
_CrtMemDumpAllObjectsSince( &amp;startState );

// capture debug heap state after doing stuff
_CrtMemState endState;
memset( &amp;endState, 0, sizeof( endState ) );
_CrtMemCheckpoint( &amp;endState );

// assert that memory wasn't leaked
_CrtMemState diffState;
memset( &amp;diffState, 0, sizeof( diffState ) );
_ASSERT( !_CrtMemDifference( &amp;diffState, &amp;startState, &amp;endState ) );
</code></pre></div></div>
<p>This gives you the dump you expect during shutdown, but somewhere in your app where you can be more confident that every leak reported is a valid leak. It also generates an assert so you can trap in the debugger as soon as a memory leak is detected, which is very handy for keeping your code honest in the long term. Keep in mind that other threads’ allocations will be included in this, so pick where you check for leaks very carefully!</p>
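<p>The CRT checkpoint functions above are MSVC-only. The same checkpoint/diff pattern can be sketched portably by counting live allocations through the replaceable global <code>operator new</code>/<code>operator delete</code>. This is a toy stand-in for the debug heap, not a replacement for it (a real tracker would also record per-block callstacks); the <code>MemCheckpoint</code>/<code>MemDifference</code> names are mine:</p>

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Running count of allocations that have not yet been freed
static long g_LiveAllocations = 0;

void* operator new( std::size_t size )
{
    ++g_LiveAllocations;
    if ( void* p = std::malloc( size ) )
        return p;
    throw std::bad_alloc();
}

void operator delete( void* p ) noexcept
{
    if ( p )
    {
        --g_LiveAllocations;
        std::free( p );
    }
}

// Mirrors _CrtMemCheckpoint: snapshot the live-allocation count
long MemCheckpoint()
{
    return g_LiveAllocations;
}

// Mirrors _CrtMemDifference: nonzero means blocks were allocated
// since the checkpoint and never freed
long MemDifference( long startState )
{
    return g_LiveAllocations - startState;
}
```

<p>As with the CRT version, the value of the pattern is choosing a sync point in your app where a nonzero difference is unambiguously a leak, then asserting on it.</p>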
Code Organization Patterns2011-06-01T00:00:00+00:00http://gorlak.dev/articles/2011/06/01/code-organization-patterns
<p><em>This was originally published on <a href="http://altdevblogaday.com">AltDevBlogADay</a> in June of 2011</em></p>
<p>Lately I am settling into a new job over at Neversoft. There are some awesome folks over there, and I am really enjoying it so far. Along with starting a new job comes learning a completely different codebase. This can be especially arduous for tools folks since tools code typically sits atop a mountain of engine, pipeline, and foundation code.</p>
<p>In trying to wrap my head around an entirely new chunk of tech, I keep re-discovering patterns that make it easier to get your bearings on a lot of new code quickly. There are lots of these patterns that studios follow when organizing their code, and following these can make it easier to dive in and start getting work done (or just to get work done in general). Some or all of these may be obvious to experienced engineers, but I figure it never hurts to reinforce best practices, and you never know when someone will have the total opposite opinion for really interesting reasons.</p>
<p><strong>Maintain just a handful of high-level solutions so it’s easy to gain grand perspective.</strong></p>
<p>The lower the solution count in your project the better. Ideally they should all be in the top level folder of your code. The key here is to create awareness of the major chunks of technology in your project. I think most people agree that the bar should be low for any engineer to get in and look at tools, engine, or game code. The more you hide solutions within your code tree the more arcane knowledge is required to even know who the major players are in your codebase.</p>
<p><strong>Direct all compiler output to a single folder.</strong></p>
<p>Nothing hurts broad searches more than having large binary files mixed in with the source you are trying to search. It’s probably the reason why Visual Studio has preconfigured laundry lists of source code file filters in their Find in Files tool. If you redirect all your compiler output to its own dedicated root folder then broad searches get orders of magnitude faster since they don’t have to wade through compiler data.</p>
<p>If your compiler output is directed to a separate dedicated folder then doing a clean build is just a simple matter of destroying the output folder and re-running your build. Explicit cleans are just slower, and it’s just easier to delete a folder when scripting things like build server operations.</p>
<p>Code generated via custom build steps counts as compiler output too! Add your output location as an include path and #include generated code, even c/cpp files. Doing this keeps a very clear distinction between generated code and code which belongs in revision control (and hopefully you aren’t storing generated code in revision control!).</p>
<p><strong>Keep 3rd party library code and solutions separate.</strong></p>
<p>A big part of effectively searching through your codebase is being able to differentiate your code from external library code. Littering 3rd party libraries in with your own code can muddle search results.</p>
<p>Frequently it’s not necessary to clean build both 3rd party code and your project code, so having separate solutions can save time. It also makes performing search and replaces within solutions that only have your project code in them safer (you don’t want to search and replace within a 3rd party lib do you!?).</p>
<p><strong>Install large 3rd party SDKs directly onto workstations.</strong></p>
<p>Revision control isn’t the only software delivery mechanism on the planet. Nobody should be making changes within the CellSDK, DirectX SDK, or FBX SDK so they shouldn’t be checked into revision control. These packages tend to be very easy to script for unattended installation (msiexec). This makes it easy to write a simple SDK checkup script to make sure that any given client (even build servers) have the latest kit installed.</p>
<p>Most large SDKs have environment variables that make them easy to find on the system, and even if they don’t you can typically assume where they should be installed. If they are missing it’s a simple thing to track down and install them (even for junior or associate engineers). Also, it never hurts to add compile-time asserts to validate that the code is being built against the correct version of those libraries.</p>
<p>If you happen to develop on a system with a package manager, they are awesome for making it easy to pull down 3rd party libraries directly off the internet. Microsoft’s <a href="http://coapp.org/">CoApp</a> project aims to do just that on Windows.</p>
<p><strong>Only check in binaries of what you cannot easily compile.</strong></p>
<p>The fewer compiled binaries you check in, the better your revision control will perform, and everyone you work with is served better when revision control works well. Source code is much quicker to transfer and store on servers and peers. Not checking in compiled binaries means less waiting for transfers, less locking for centralized servers, and less long-term size creep for distributed repositories.</p>
<p>Checking in built versions of libraries will create a headache for yourself in the future when you want to deploy a new compiler or support a new architecture (which will require you to recompile using a bunch of crusty project files that haven’t been used in months or years). It’s always worth a little extra time when adding a new external library to take command over your build configuration management. Sometimes this can involve making your own project files instead of using ones that may be included with the library source code. High level build scripting tools like Premake, CMake, and boost::build are worth spending time to learn, and can make hand-creating IDE-specific projects seem archaic. If updating external libraries in your engine is easy you will do it more often, and hence reap the benefit of more frequent fixes and improvements you don’t have to do yourself.</p>
Popstocks2011-03-01T00:00:00+00:00http://gorlak.dev/articles/2011/03/01/popstocks
<p><em>This was originally published on <a href="http://altdevblogaday.com">AltDevBlogADay</a> in March of 2011</em></p>
<p><a href="http://twitter.com/gorlak">Myself</a>, <a href="http://twitter.com/andybrk">Andy Burke</a>, <a href="http://twitter.com/kramdar">Rachel Mark</a>, <a href="http://twitter.com/marcsh">Marc Hernandez</a>, and <a href="http://twitter.com/Pacman2k">Paul Haile</a> have built a Facebook game. The goal was to spend 1 week building a game that would be fun enough and monetized well enough to grow into something that could generate some actual income. While we aren’t quite done with the game yet, I feel we have learned enough major lessons to justify a pre-launch Post-Mortem.</p>
<p><a href="http://www.facebook.com/apps/application.php?id=201641909847723"><img class="alignnone size-full wp-image-1829" src="/assets/popstocks-logo-large.png" alt="" width="150" height="150" /></a></p>
<p><a href="http://www.facebook.com/apps/application.php?id=201641909847723"><strong>PopStocks</strong></a> models a stock market, but instead of companies we trade shares in Facebook Pages as items of value. A Facebook Page must have at least 100,000 likes before we create a stock for it in our game, and the number of likes for that page dictates the total capitalization of the stock. Every stock starts out at 25 Pops (our in-game currency), and its value on the market is dictated by market orders, limit orders, and rumors. Rumors are actions that players can purchase with Facebook Credits, and can affect the market price of a stock. PopStocks also has a store with power-ups that can prioritize a player’s trades, provide advice on what to trade, and offer more in-game currency in exchange for Facebook Credits. We are currently in the process (this very weekend) of adding more items to the store for players who want to redeem in-game wealth for real-world discounts and items. We aim to be feature complete by Monday and just focus on polish and marketing buys through Friday.</p>
<p><a href="/assets/popstocks-excited-small.png"><img class="alignnone size-full wp-image-1827" src="/assets/popstocks-excited-small.png" alt="" width="150" height="150" /></a></p>
<p><strong>What Went Right</strong></p>
<ul>
<li><strong>Shared vision of what we were building.</strong> We spent our entire first day working through what the social and monetary implications were for all our ideas. PopStocks ended up being the idea with the lowest cost to implement, lowest amount of art required, and had the highest potential for monetization. The idea of a virtual stock market wasn't really new, but the idea of using Facebook itself as the source for stocks gave the entire team ideas about where to take our game.</li>
<li><strong>Google App Engine as a development platform.</strong> Python and Google's data model classes (backed by BigTable) made fleshing out new functionality a breeze. While the BigTable backend does have limitations with what is possible with its query construction, the fact that you can rearrange your data so quickly makes it an all around win. Google has made specific choices about what features are available in queries for performance and scalability reasons, and deciding to prohibit inefficient features causes your design to iterate toward something that is more optimal than if you had every feature of SQL. While Eclipse as an IDE and debugger leaves a lot to be desired (it suffers every trap of software more focused on plug-ins than on unified workflow), it's totally passable for a short project like PopStocks. Deployment to Google's servers is a breeze, and the backend control panel has just about every feature you could need.</li>
<li><strong>99 designs as a marketplace for art and iconography.</strong> From the get-go we wanted a game that was as minimal on art as we could get. We are all engineers, and didn't have the cash on hand to put a lot of money into art. We used <a href="http://99designs.com/">99designs.com</a> for only the important items we absolutely needed: a logo, a trophy icon, and some images of our broker character in various emotional states. We put up a 3 day contest and didn't see much we liked until the very last day where we got a submission that just hit out-of-the-park what we were looking for. For only a couple hundred dollars we had enough art to get to alpha.</li>
<li><strong>Every developer uses a different platform and browser.</strong> The entire team had coverage of Mac, Linux, and Windows, and used Chrome, Firefox, IE, and Safari to develop features with. This brought to light compatibility issues extremely quickly, and in general we were running in every browser all the time. Despite Eclipse's usability drawbacks, it has functioned very well on all our development platforms. We spent one day getting all our workstations configured as a full development environment and it hasn't been a time sink since.</li>
<li><strong>Multiple builds of the game in Facebook. </strong>We set up 3 actual applications in Facebook: PopStocks-devel which points to localhost/127.0.0.1 (we run a local Google App Engine SDK server instance so we can debug the Python), PopStocks Playground which runs on Google App Engine but has debug features enabled, and the final game application: PopStocks. Having Playground as an application in Facebook was invaluable for doing quick test passes before deploying to production. We could also completely burn the data store on Playground after hacking it up for various development reasons. We did our initial testing with our friends on Playground before we were comfortable enough to launch the main game. We didn't want to ever have to wipe the data store for the production instance.</li>
<li><strong>Git and GitHub.</strong> Git is fantastic and GitHub makes it even better. We worked exclusively through a GitHub private repo, which is only $25/mo. We used its simple (but good enough) ticketing system for our issues as well as user-contributed bugs (contributed via a submission form in the game). GitHub was only down for 30 minutes in the middle of one of the nights we were working.</li>
</ul>
<p><a href="/assets/popstocks-disappointed-small.png"><img class="alignnone size-full wp-image-1828" src="/assets/popstocks-disappointed-small.png" alt="" width="150" height="150" /></a></p>
<p><strong>What Went Wrong</strong></p>
<ul>
<li><strong>Learning to program for the web.</strong> Everyone on the team had varying degrees of experience with developing for the web. Andy had the most experience (with Google App Engine, Python REST APIs, and Javascript), and probably spent half of his time just talking through with the rest of the team how things should work, and pointing out which mistakes were worth correcting and which were not. Time being a valuable commodity it is important to decide when to undertake refactoring passes (or when to burn entire features). Keeping the game running is important sometimes, but not others. We ended up building most of our features before doing any significant refactoring and then doing one massive reorganization to correct past mistakes several days before launch (last night).</li>
<li><strong>Facebook authentication.</strong> To authenticate as a user logged into Facebook one must parse a 'signed_request' which is delivered in a POST on initial page load. Being one of the first things your game would need to do, you would think there would be some handy example code in the official Facebook Python SDK. Yeah, not so much. Lacking good (or obtainable) documentation about what exactly a 'signed_request' is and how to parse one, it took quite a bit of digging to get it working. Turns out there is a pending pull request to add the necessary code to the official Facebook Python SDK, but nobody at Facebook has accepted that request yet. This burned about half a day... for something that should be documented as step 1 for building a Facebook application.</li>
<li><strong>Facebook stability issues.</strong> Sometimes Facebook will take its sweet time to reply to graph API requests... and sometimes its graph API code will time out because the site just takes too long to reply. Sometimes certain searches will break and fix themselves during the day. We haven't found much visibility into how to monitor these stability gaps, you just have to take them in stride. Also, if you use FQL, make sure not to use LIMIT 1 in your queries because sometimes it will just give you zero results instead of the first item you were searching for.</li>
<li><strong>Internet Explorer being what it is.</strong> <a href="http://bits.blogs.nytimes.com/2010/09/17/a-loophole-big-enough-for-a-cookie-to-fit-through/">CP=HONK</a> is the magic that is needed to make iframe cookies work in IE. Also, don't forget to omit the final comma delimiter in JavaScript lists. Also, setTimeout(). We had to make more exceptions for IE than any other browser, by far. Supporting IE 8 cost our project at least 1 man day.</li>
<li><strong>GDC.</strong> The Game Developers Conference happened right in the middle of our production period, but there wasn't anything we could do about that. We lost more than half the team for an entire week which put a ton of work (and crunch) on a very few. It also took attendees some time to hit the ground running after focusing on other things at GDC for an entire week.</li>
</ul>
<p><a href="/assets/popstocks-happy-small.png"><img class="alignnone size-full wp-image-1841" src="/assets/popstocks-happy-small.png" alt="" width="150" height="150" /></a></p>
<p><strong>Conclusion</strong></p>
<p>Everything took longer than we thought it would, and we ended up tripling our 1 week deadline. Feature creep contributed about a week to our blown deadline. Halfway into the first week we saw that things were taking a long time to come online, and we started to worry that we wouldn’t have enough compelling gameplay to be viable to even our friends (much less complete strangers). If we had stuck to our original plan of a very minimal feature set we might have completed our game in two weeks, but it wouldn’t be nearly what it is today. Let’s hope that bet pays off.</p>
Behind the Mirror2011-02-01T00:00:00+00:00http://gorlak.dev/articles/2011/02/01/behind-the-mirror
<p><em>This article was originally printed as a <a href="http://gdmag.com">Game Developer Magazine</a> article in the February 2011 issue. It’s also available online at <a href="http://www.gamasutra.com/view/feature/6379/sponsored_feature_behind_the_.php">Gamasutra</a> as a sponsored feature thanks to my ex-Insomniac friend and coworker Orion Granatir, who also moved on to <a href="http://intel.com">Intel</a>.</em></p>
<h2 id="adding-reflection-to-c">Adding Reflection to C++</h2>
<p>Reflection is a programming language feature that adds the ability for a program to utilize its own structure to inform its behavior. Reflection has its costs, but those are often outweighed by the ability to automate the following:</p>
<ul>
<li>Serializing objects into and out of a file</li>
<li>Cloning, comparison, search indexing, and network replication</li>
<li>Type conversion (copying base data between derived class instances)</li>
<li>User interface generation and data binding</li>
</ul>
<p>Of course all of these tasks can be accomplished without reflection capabilities, but you will likely pay a higher cost: writing code that is rote and prone to error. A good implementation of reflection can provide a platform on which each of these problems can be solved without glue code in every class that desires these features.</p>
<p>At the highest level reflection can encompass many different features:</p>
<ul>
<li>Runtime knowledge of class members (fields and methods)</li>
<li>Dynamic generation and adaptation of code</li>
<li>Dynamic dispatch of procedure calls</li>
<li>Dynamic type creation</li>
</ul>
<p>However, for the purposes of this article I will define C++ Reflection to mean: having access at runtime to information about the C++ classes in your program.</p>
<h2 id="rtti">RTTI</h2>
<p>Before diving headlong into how to add reflection to C++, it’s worth noting what type information is already built-in. The C++ language specification provides minimal information about the classes compiled into a program. When enabled, C++ Run Time Type Information (RTTI) can provide only enough information to generate an id and name (the typeid operator), and handle identifying an instance’s class given any type of compatible pointer (dynamic_cast<>).</p>
<p>For the purpose of game programming, RTTI is often disabled entirely. This is because its implementation is more costly than a system built on top of C++. Even if a program only makes a handful of RTTI queries, the toolchain is typically forced to generate, link, and allocate memory at runtime for information about every class in the application (that has a vtable). This significantly increases the amount of memory required to load your program, leaving less memory available for face-melting graphics, physics, and AI. It’s better to implement your own RTTI-like system that only adds cost to the classes that need to utilize it. There are plenty of practical situations where vtables make sense without needing to do runtime type checking.</p>
<p>Thus, the first step in implementing a reflection system is typically a user-space implementation of RTTI features, which can be accomplished in just a couple of steps. Type information is associated with each class via a static member pointer (which also makes a good unique identifier for any given type within the program). In addition, a couple of virtual functions allow querying an object’s exact type, as well as testing for base class types:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Returns the type for this instance
virtual const Type* GetType() const;
// Deduces type membership for this instance
virtual bool HasType( const Type* type ) const;
</code></pre></div></div>
<p>GetType returns a pointer to the static type data, and HasType compares the provided type against its static type pointer as well as every base class’ type pointer. This gives us all the information needed to re-implement dynamic_cast<>, but it only adds overhead to classes that are worth paying the added cost of type identification and type checking.</p>
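A rough sketch of this scheme might look like the following. The `Type` struct, member names, and the `TypeCast` helper here are illustrative inventions, not the interface of any particular engine:

```cpp
#include <cassert>
#include <cstddef>

// Minimal sketch of a user-space RTTI scheme; names are illustrative.
struct Type
{
    const char* m_Name;
    const Type* m_Base; // NULL for root classes
};

class Object
{
public:
    static const Type s_Type;
    virtual ~Object() {}

    // Returns the type for this instance
    virtual const Type* GetType() const { return &s_Type; }

    // Deduces type membership by walking the static base-type chain
    bool HasType( const Type* type ) const
    {
        for ( const Type* t = GetType(); t != NULL; t = t->m_Base )
        {
            if ( t == type )
            {
                return true;
            }
        }
        return false;
    }
};
const Type Object::s_Type = { "Object", NULL };

class Entity : public Object
{
public:
    static const Type s_Type;
    virtual const Type* GetType() const { return &s_Type; }
};
const Type Entity::s_Type = { "Entity", &Object::s_Type };

// A dynamic_cast<>-style helper built on HasType
template< class T >
T* TypeCast( Object* object )
{
    return ( object && object->HasType( &T::s_Type ) )
        ? static_cast< T* >( object )
        : NULL;
}
```

Note that only classes opting into the hierarchy pay for the static type pointer and the virtual `GetType` override; unrelated classes pay nothing.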
<h2 id="visitor-pattern">Visitor Pattern</h2>
<p>The simplest technique for implementing reflection is to take a purely programmatic approach. Virtual functions can be a mechanism for the traversal of all fields in a class.
The visitor design pattern provides an abstraction for performing arbitrary operations on the fields:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// a base class for any object that
// wants to reflect upon any class' fields
class ObjectVisitor
{
public:
    virtual void VisitField( int32_t&amp;, const char* ) = 0;
};

// an example of a class that would write/read
// from/to each field to/from a file
class SerializeVisitor : public ObjectVisitor
{
public:
    virtual void VisitField( int32_t&amp; value, const char* name )
    {
        // do serialization work
    }
};

// a base class for some of your reflection-aware objects
class Object
{
public:
    virtual void Accept( ObjectVisitor&amp; visitor ) = 0;
};

// an example of a derived class that has a reflected field
class Foo : public Object
{
public:
    virtual void Accept( ObjectVisitor&amp; visitor )
    {
        visitor.VisitField( m_Number, "Number" );
    }

private:
    int32_t m_Number;
};
</code></pre></div></div>
<p>This is a textbook implementation of the visitor design pattern. Each object delivers the visitor to each of its fields in turn, and the visitor gets an opportunity to transact with every field in series. It offers excellent encapsulation since the object does not know or care about any implementation details of what the visitor is trying to accomplish.</p>
<p>This technique does not require data from an external tool to do its job since it’s implemented entirely in the code compiled into the program. It’s simple to step through and debug, and extensible since many operations can be implemented as another class of Visitor.</p>
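To illustrate that extensibility, here is a reduced, self-contained version of the interfaces above with a new operation, a field counter, added purely as another visitor class (the `CountingVisitor` name is an invented example):

```cpp
#include <cassert>
#include <cstdint>

// Reduced versions of the visitor interfaces from the article
class ObjectVisitor
{
public:
    virtual ~ObjectVisitor() {}
    virtual void VisitField( int32_t& value, const char* name ) = 0;
};

class Object
{
public:
    virtual ~Object() {}
    virtual void Accept( ObjectVisitor& visitor ) = 0;
};

class Foo : public Object
{
public:
    Foo() : m_Number( 7 ) {}
    virtual void Accept( ObjectVisitor& visitor )
    {
        visitor.VisitField( m_Number, "Number" );
    }
private:
    int32_t m_Number;
};

// A new operation implemented without touching the object classes at all
class CountingVisitor : public ObjectVisitor
{
public:
    CountingVisitor() : m_Count( 0 ) {}
    virtual void VisitField( int32_t&, const char* ) { m_Count++; }
    int m_Count;
};
```

Adding a clone visitor, a diff visitor, or a UI-building visitor follows the same shape: one new class, zero changes to the reflected objects.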
<p>With this approach, the development cost is small. A single line of code for each field in every class in your codebase is a fair price to pay to attain the benefits reflection can provide. However, there are some drawbacks with using a visitor function for reflecting upon your objects. There are a lot of virtual function calls happening to interact with each field in a class. This is a concern for performance critical code and on certain platforms. Also, this technique is best suited for operations that want to visit every single field of a class. There are many situations where this work is not required, and iterating over every field just to access a few is wasteful and time consuming (depending on the size of the object).</p>
<h2 id="data-model">Data Model</h2>
<p>To really take reflection to the next level it’s necessary to be able to address specific fields and read and write data without iterating over every field in the class. A data model that represents the classes and fields specified in the code is needed to accomplish this. At runtime your program can reflect upon this model to interface with objects and their field data.</p>
<p>This data model is owned by a central registry of type information. This singleton object owns all the type information in the program and can have support for finding type information by name. It’s also a central point where a map of the entire inheritance hierarchy of classes can be built. The registry can be populated by employing a parser tool to analyze your source code, or by adopting a method similar to the visitor function approach to populate this data model at program startup.</p>
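A minimal sketch of such a registry follows. The `Registry` interface shown here is hypothetical, and the `Class` placeholder stands in for the richer structure defined later in this article:

```cpp
#include <cassert>
#include <cstring>
#include <map>
#include <string>

// Placeholder for the Class structure defined later in the article
struct Class
{
    const char* m_Name;
    explicit Class( const char* name ) : m_Name( name ) {}
};

// Hypothetical singleton that owns all type information in the program
class Registry
{
public:
    static Registry& Get()
    {
        static Registry instance;
        return instance;
    }

    void RegisterClass( Class* type )
    {
        m_Classes[ type->m_Name ] = type;
    }

    // Supports finding type information by name
    Class* FindClass( const char* name ) const
    {
        std::map< std::string, Class* >::const_iterator it = m_Classes.find( name );
        return it != m_Classes.end() ? it->second : NULL;
    }

private:
    std::map< std::string, Class* > m_Classes;
};
```

A production version would also track the inheritance hierarchy (base and derived class links) so the whole class graph can be walked from one place.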
<h2 id="to-parse-or-not-to-parse">To Parse or Not To Parse…</h2>
<p>Using a parsing tool to analyze your code introduces a lot of complexity. C++ has a very complex syntax. While there are some tools you can take off the shelf to do the parsing, there is still a lot of work to do to make that data usable at runtime. Typically, you want to extract just the necessary data from the abstract parse tree and write out a meaningful representation of only the data that is required for what you want to reflect upon. Templates, typedefs, functions and other language features are generally overkill for the purpose of reflecting upon fields in a class.</p>
<p>A parsing tool is probably going to do one of two things: write a data file to be loaded at runtime (or packed into the executable as a global variable or resource section), or it’s going to generate some code that gets compiled into your program.</p>
<p>If you choose the data file route, you have the added task of computing member size and offset information. This information is compiler specific and target platform specific. By
choosing this approach you are committing to abide by the padding and alignment rules of whatever compiler you use to build any given version of your program. Another source of complexity comes from the existence of two independent pipelines processing information about your code: the compiler and the parsing tool. This necessitates synchronizing the data output by the tool with the specific version of the compiled program, which will make packaging and deploying your program harder. Synchronization is a very important problem to solve in this approach because not detecting out of sync reflection information can cause nasty bugs (and potentially mangled data).</p>
<p>If you choose to generate source code to be compiled into your program, you inherit the burden of the complexities that come with creating a code generator that is most likely specific to your particular needs. The code generation tool will probably need to make a bunch of decisions about how your code needs to be decorated and organized. These requirements will change as your codebase evolves, and it will require you to be diligent about releasing and configuring your own build tool. Also, maintaining a tool that governs the ability to compile your game is risky because it has a tendency to break at the worst possible time (during a milestone).</p>
<p>The reward for using these approaches is tangible. You don’t have any code that needs to be written by hand to reflect upon your classes. If you choose to generate code, then you will also probably get great performance since you can generate function bodies that do specific operations on every field of your classes, just like you would have done if you weren’t using reflection at all.</p>
<p>In reality, there are a ton of moving parts when using this approach. Things can break in hard to trace ways if any step of the pipeline doesn’t work as expected. Having implemented and maintained this technique for many years, I can tell you that there are days when it feels like the planets have to align for all the parts in this complex pipeline to actually work together in harmony.</p>
<h2 id="hand-coding">Hand Coding</h2>
<p>Alternatively, code can be written to populate the reflection data model when our program starts up. This code creates class information structures, populates them with information about every field within the class, and adds them to the registry. Writing this code sounds arduous, but C++ template support provides some excellent tools to accomplish this with remarkably concise and manageable code. A good goal for this is to extract as much information as possible in a single function call per field, per class (just like our visitor function). This allows us to avoid any time spent at build time processing source, managing dependencies on build tools, dependency checking generated code, and synchronizing externally loaded data.</p>
<h2 id="polymorphic-data">Polymorphic Data</h2>
<p>Because containers in C++ are template types instead of concrete types, function overloading can only take us so far. Since each template instantiation is a completely different type, trying to support containers using a visitor pattern could lead to a combinatorial explosion in the number of required virtual function overloads. Enumerated data types present the same challenge: supporting them via overloading would require a different overload for every enum in the entire game.</p>
<p>A solution to this shortcoming is to delegate the handling of any piece of data to a separate class of object that can interface with individual fields using a pointer. This will give us the ability to operate on any data in a polymorphic manner, including integer, floating point, and enumerated data types. Many languages that require derivation from a canonical Object class do this already. Adding support for treating simple types with polymorphism doesn’t mean that it’s necessary to use these polymorphic versions of these types everywhere in your code. They will only be used to abstract away the implementation details of dealing with serializing, comparing, and converting data to and from human readable strings (which is very handy for generating property UIs).</p>
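A sketch of that delegate idea: a polymorphic `Data` base class works through a pointer to the field, and a single `SimpleData&lt; T &gt;` template instantiates one handler per primitive type. The names and the `Connect`/`ToString` interface here are illustrative assumptions, not the article's exact API:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>

// Hypothetical base class for objects that handle a field through a pointer
class Data
{
public:
    virtual ~Data() {}
    virtual void Connect( void* field ) = 0;
    virtual std::string ToString() const = 0;
};

// One template handles all simple types; each instantiation is a
// polymorphic wrapper around a pointer to a field of that type
template< class T >
class SimpleData : public Data
{
public:
    SimpleData() : m_Field( NULL ) {}
    virtual void Connect( void* field ) { m_Field = static_cast< T* >( field ); }
    virtual std::string ToString() const
    {
        std::ostringstream stream;
        stream << *m_Field;
        return stream.str();
    }
private:
    T* m_Field;
};
```

Serialization, comparison, and string conversion can all live behind this one interface, so the rest of the engine never needs to know the concrete field type.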
<p>Truly polymorphic data can solve many edge cases and provide extensibility for user types like enums and exotic containers. It can also support user data types that need custom processing during serialization. If these data classes store a value in addition to working through a pointer, they can be used to interface with fields and store standalone data. This allows for interoperability between versions of the program that have slightly different fields without discarding this “unknown” information. This is a major coup for game development tools that revise sets of properties frequently between releases. You can publish a test release with a very different set of properties and know that if content creators check in some of those files they probably won’t break folks still using the stable production tools (since the stable tools data is still there in the files).</p>
<p>Every field in the reflection information will specify a class of object that will handle the details of reading and writing the necessary data to a persistence interface or other objects of the same type. With this in mind, it’s time to declare some data structures to store Class and Field information:</p>
<h2 id="class">Class</h2>
<p>Class stores all of the information for a class of object in your code (client object or data object):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct Class
{
    const Class*          m_Base;    // our base class
    Array< const Class* > m_Derived; // our derived classes
    const char*           m_Name;    // our name (user-friendly)
    Array< const Field* > m_Fields;  // fields of this class

    Class( const char* name )
        : m_Base( NULL )
        , m_Name( name )
    {
    }
};
</code></pre></div></div>
<h2 id="field">Field</h2>
<p>Field stores information about a particular member variable in a class. Fields are stored in an array in the Class object that owns them.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct Field
{
    const Class* m_OwnerClass; // the class this is within
    const Class* m_DataClass;  // the class of data
                               // that serializes the field
    const char*  m_Name;       // name of the field
    size_t       m_Size;       // the size of the field
    uintptr_t    m_Offset;     // the offset to the field

    Field( const Class* owner, const Class* data,
           const char* name, size_t size, uintptr_t offset )
        : m_OwnerClass( owner )
        , m_DataClass( data )
        , m_Name( name )
        , m_Size( size )
        , m_Offset( offset )
    {
    }
};
</code></pre></div></div>
<h2 id="populating-the-data-model">Populating the Data Model</h2>
<p>To help populate the data model, some template functions can help extract useful data via template parameters:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>template< class ObjectT, class DataT >
Field* AddField( Class* owner, DataT ObjectT::* field, const char* name,
                 const Class* data = NULL )
{
    // call out to a template function that is specialized
    // to return the appropriate data class for this type of field,
    // and compute the offset from the base pointer
    // using the pointer to the member variable
    Field* result = new Field( owner,
                               data ? data : DeduceDataClass< DataT >(),
                               name,
                               sizeof( DataT ),
                               GetFieldOffset( field ) );

    owner->m_Fields.Push( result );
    return result;
}
</code></pre></div></div>
<p>A template function with parameters for the object type and variable type provides an easy way to extract the size of the variable and its offset from the base instance pointer (using a pointer to member variable), and it supports the use of template specialization to deduce which type of data object is applicable to the field. Three important things are happening in this function to extract data for our reflection data model: pointer to member variable syntax, translation of that syntax into an offset from a base object address, and deduction via explicit template specialization.</p>
<h2 id="pointer-to-member-variables">Pointer to Member Variables</h2>
<p>Pointers to member variables are a pretty infrequently used aspect of C++. They do what you might expect, but the syntax is strange if you haven’t seen it before:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>int32_t Object::* pointer_to_member_variable = &amp;Object::m_Member;
</code></pre></div></div>
<p>These are typically dereferenced with an instance of the object type (just like member function pointers):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Object object, *pointer = new Object;
int32_t value1 = object.*pointer_to_member_variable;
int32_t value2 = pointer-&gt;*pointer_to_member_variable;
</code></pre></div></div>
<h2 id="translation-into-offset">Translation Into Offset</h2>
<p>To compute the offset from a pointer to a member variable:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>template< class ObjectT, class DataT >
uintptr_t GetFieldOffset( DataT ObjectT::* field )
{
    // a pointer-to-member is really just an
    // offset value disguised by the compiler
    return (uintptr_t) &amp;( ((ObjectT*)NULL)->*field );
}
</code></pre></div></div>
<p>This function doesn’t bother with allocating an instance to dereference the pointer to member variable. It substitutes a NULL pointer, dereferences the pointer to member variable, and uses the address-of operator to yield the offset (from NULL) at which the pointed-to member exists. Some of this syntax may seem strange if you haven’t used it before, but it’s a perfect fit for packing all the information needed to describe a field into a single function parameter.</p>
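The trick can be sanity-checked against the standard <code>offsetof</code> macro. Note that dereferencing a NULL pointer this way is technically undefined behavior, though it mirrors how compilers have traditionally implemented <code>offsetof</code>-style math; this is a sketch, not a portability guarantee:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// A hypothetical standard-layout struct to measure
struct Example
{
    int32_t m_First;
    int32_t m_Second;
};

template< class ObjectT, class DataT >
uintptr_t GetFieldOffset( DataT ObjectT::* field )
{
    // a pointer-to-member is really just an offset in disguise;
    // dereference against a NULL base and take the address
    return (uintptr_t) &( ((ObjectT*)NULL)->*field );
}
```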
<h2 id="explicit-specialization">Explicit Specialization</h2>
<p>DeduceDataClass is a good example of template deduction using explicit template specialization. This technique uses the C++ template mechanism to let the compiler automatically select some information based only on a template parameter. The default template function’s implementation returns NULL, indicating that deduction failed because no specialization exists to supply the associated data class:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>template< class DataT >
Class* DeduceDataClass()
{
    // unknown data!
    return NULL;
}
</code></pre></div></div>
<p>Then create an explicit specialization for every type that can be deduced:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>template<>
Class* DeduceDataClass< uint32_t >()
{
    // this specialization associates the uint32_t
    // built in type with an object class that can
    // process data of type uint32_t with respect
    // to other persistence / cloning / mining code
    return SimpleData< uint32_t >::s_Class;
}
</code></pre></div></div>
<p>In this case, a pointer is returned to the class reflection information for the type of data object to be used when dealing with the built-in type passed into the template argument.</p>
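Stripped to its essence, the deduction mechanism looks like this. Purely for illustration, this sketch maps types to name strings rather than to data class pointers:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Default: deduction failed, no specialization exists for this type
template< class DataT >
const char* DeduceDataName()
{
    return NULL;
}

// Explicit specializations associate a type with its handler
template<>
const char* DeduceDataName< uint32_t >()
{
    return "SimpleData<uint32_t>";
}

template<>
const char* DeduceDataName< float >()
{
    return "SimpleData<float>";
}
```

The compiler selects the right specialization at the call site from the template argument alone, which is exactly what lets AddField find a data class without the caller naming one.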
<h2 id="putting-it-all-together">Putting It All Together</h2>
<p>One more template will help keep the code that registers classes at startup concise:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>template< class ObjectT >
static Class* CreateClass( const char* name )
{
    Class* result = new Class( name );

    // populate the field information for this class
    ObjectT::Populate( *result );

    return result;
}
</code></pre></div></div>
<p>Finally, an example class and main that will put all of this code to work:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Foo
{
private:
    uint32_t m_Number;

public:
    static void Populate( Class&amp; c )
    {
        // AddField is a template function that will deduce
        // everything but what the desired name is.
        Field* f = AddField( &amp;c, &amp;Foo::m_Number, "Number" );

        // It's easy to imagine Field having extra information
        // to inform all sorts of program behavior
        // f->SetRange( 0, 10 );
        // f->SetCategory( "Advanced Settings" );
    }
};

int main()
{
    Registry::RegisterClass( CreateClass< Foo >( "Foo" ) );

    // program goes here

    Registry::UnregisterClass( Registry::GetClass< Foo >() );
    return 0;
}
</code></pre></div></div>
<h2 id="conclusion">Conclusion</h2>
<p>Reflection can add an enormous amount of flexibility to your game engine, but this flexibility doesn’t come without cost. However, the extra memory reflection data consumes is balanced by the time saved implementing features on top of it. The ability to deliver changes to your users quickly, and with minimal engineering overhead, will pay dividends as your user base grows and your production time stretches across multiple titles.</p>
<h2 id="open-source-implementation">Open Source Implementation</h2>
<p>Helium is an open source game engine toolkit that contains an implementation of C++ Reflection. Much of the code in this article was derived from it. It uses a BSD-style license, and is available at <a href="http://heliumproject.org">heliumproject.org</a>. The reflection system itself is located in the Foundation/Reflect folder within the source repository.</p>
<h2 id="bio">Bio</h2>
<p>At the time of this writing, Geoff Evans was a Senior Engineer at WhiteMoon Dreams, Inc. in Los Angeles, CA. He was a founder of the Nocturnal Initiative open source project at Insomniac Games, and is a founder of the Helium Project (started at WhiteMoon Dreams), which aims to build an open source commercial quality game engine.
</p>