Tag Archives: gaming

Gamers Are Dead

Those three words, an implementation of a rather tired and banal journalistic cliché indicating that a fad is over, sparked a disgusting outpouring of hatred across the world over the past month. I risk reigniting it with this article, but I do so knowingly and willingly, because, quite frankly, the cliché doesn’t go far enough. The concept of being a “gamer” is not merely dead; its rotted and decaying corpse is being paraded around on strings, made to dance for the whims of a handful of misanthropes desperate to cling to the only piece of identity they have left. Gamers aren’t just dead, they’re undead. And like the undead, we have to stop this zombie outbreak before it threatens the world.

The idea of being a “gamer” was originally created as a result of advertisers trying to pigeonhole the customers who were buying their clients’ products into a cohesive demographic at a time when there wasn’t one. Nobody knew what magazines to advertise games and consoles in, outside of general computing magazines. Eventually, people like Larry Flynt picked up on the fact that there were people out there who were buying those computing magazines only for gaming coverage. Flynt then bankrolled the creation of Video Games and Computer Entertainment, one of the first post-Crash magazines to cover the newly resurrected industry exclusively. VGCE was one of the first magazines that had to wrestle with a new problem: what else could we sell to people who bought video games?

Eventually, that question began to answer itself in a rather distressing form: “more video games”. I still remember with a certain twisted sense of fondness the overambitious and somewhat dodgy advertisements in the first few issues of VGCE, for things like arcade-style NES controllers and a mail-in trade-in store (which itself would be unsettlingly prescient a couple decades down the line). But by and large, advertisers in the magazine were restricted to games and gaming stuff. A huge part of this could also be understood to be reticence on the part of more mainstream advertisers; remember that the Crash had only been a few years prior, and that as of 1983, “video games were dead”. Unsold copies of E.T. stood as grave markers in Hills and Ames, memorializing the amount of time, energy, and space wasted on what was still going to be seen as a fad for another twenty years.

Faced with this mindshare ouroboros, consumers of video games began to self-radicalize. I note with some amount of wry amusement that I’m using the 2014 sense of the word “radical” to refer to people who would be using the 1987 sense of the word. Obsession with games became a hallmark of the people who played them. This wasn’t due to any addictiveness of the games themselves (though let’s be honest, Tetris is a hell of a drug) but rather because those people simply weren’t exposed to anything else. Rather than attempt to bring them into the circles of other interests– which would have been counterproductive to the goal of making more money on video games– the publishers of the now-somewhat-established gaming journalism corps, consisting of magazines like Gamepro, VGCE, and Electronic Gaming Monthly, began actively excluding outside interests from their pages. One of the irregular features of Nintendo’s in-house publication, Nintendo Power, had been to highlight a celebrity or other outside luminary who was a fan of their games, in the hopes of steering some of that rather devoted fanbase toward the celebrity’s newest project. It failed dramatically, and by the third or fourth anniversary of the magazine it was a distant and frequently covered-up memory.

In the post-Crash landscape of the video game hobby, this singular focus on promoting games and only games was, arguably, necessary to ensure the survival of the hobby as a whole. Thus, when faced with the question of identity, consumers really only had the carefully-crafted and exclusive concept of “gamer” to fall back on. It was what they did for fun, and they didn’t really have that many other outside interests. It made sense, of a sort, to say that one was a “gamer” in the sense that one could also say they were a “fisher” or a “reader”. And at the time, there was nothing wrong with that, because the hobby was still small enough that a game that didn’t have widespread support would be too risky to release, and it was more efficient at the time to produce games that fit the demographic than it would have been to advertise to expand the demographic.

A couple of years back, I wrote a somewhat meandering series of posts on the differences between being a “community” and being an “enclave”. When I wrote that, in early 2012, I was still thinking primarily of the rather heated backlash against efforts that had been made to expand the market for consumers of video games. The Wii, hallmark of what “gamers” considered to be everything wrong with those efforts, was five years old, and the Wii U was just around the corner. What was remarkable about the concept of the Wii, and by extension Nintendo’s “blue ocean” strategy, wasn’t that supposedly “weak” games were being made, but rather that Nintendo– one of the companies most directly responsible for the recovery of the North American video game industry after the Crash– was recognizing that the approach of exclusive reinforcement was no longer viable. The industry was no longer in its crisis mode; it did not live and die over the success or failure of the Next Big Thing. It hadn’t since 1992, and Mortal Kombat.

Most people nowadays think of the original Mortal Kombat as a rather poorly-designed Street Fighter clone notable only for its gore (which is tame by today’s standards). However, it was a huge risk for an industry that was slowly coming to terms with the fact that its consumer base was going through adolescence. Games like the Super Mario and Sonic series were perennial sellers; anyone could pick them up, and they appealed to kids of all ages. But the people who had bought the very first iterations of Mario and suchlike were now seen as “outgrowing” the idea of video games, and Midway took a huge gamble in creating a game that was more “mature”. For better or worse, the gamble paid off. MK became a smash hit, and while it wasn’t universally praised or even universally bought, it was successful enough not only to kickstart a new franchise but also to open up the market to a newer section of players. It was the “blue ocean” strategy before it was called “the ‘blue ocean’ strategy”.

And that’s where the wheels fell off. Mortal Kombat was a risk, and it had opened up the market to newer players who were then inculcated into the self-affirming and exclusive “gamer” demographic. Rather than learn the lesson that there were untapped markets out there waiting to expand the numbers of potential consumers, the industry as a collective whole decided to simply strip-mine the metaphorical “new challengers” for everything they were worth. It baffles me that nobody at the time was taking the long view, and realizing that the whole of the industry didn’t collapse just because one game wasn’t “for everyone”. What happened instead was a culture of immediacy: starting in the mid-90s, there was a “new mega-hit” every few months, which would seize the whole of the market for a time, then bow out in favor of the next one.

This paradigm, a steady rhythm of games that would explode onto the scene and then fade away, attracted the attention of people who had started out playing Super Mario Bros. in their elementary and middle school days but were now college graduates with degrees in computing. There was money to be made in being that Next Big Thing, even just once. This resulted in a population explosion in the development and production sphere, like a 32-bit Baby Boom. More than that, though, the rise of the Internet in the mid-to-late 90s made it possible for smaller developers not only to market their work directly to their customers for far less than it would cost through traditional channels, but also to come together in the first place and see the underserved sections of the population who might want to play a game once in a while. The first cracks in the wall built around the idea of being a “gamer” were forming, in the shape of three gems in a row.

Originally a Flash game, Bejeweled, by the studio that would eventually become PopCap Games, was one of the first in a series of simple-to-play abstract puzzle games that would define the schism between so-called “hardcore” and “casual” games. Again, the wrong lesson was being learned, this time not by the industry but by the traditional consumer base. Flash (and its predecessor, Shockwave) was a technology that made it easy for “ordinary” people to play games on their computers. There was no setup, no tinkering necessary; just put in a URL and start playing. The money in these games wasn’t in selling access, but rather in the advertisements surrounding them– ads which, thanks to peeking in on what the user did on the computer when they weren’t playing the game, had a much better understanding of the user’s habits and preferences than Larry Flynt’s team did back in 1987. Advertisers started to realize that people who played games did more than just play games.

It went the other way, too. In the runup to the release of the Dreamcast, Sega embarked on a massive advertising campaign that rivaled any before it for a video game product. While Sony’s “U R NOT (red)E” campaign in the mid-90s had garnered some limited exposure, the “It’s Thinking” ads for the Dreamcast were everywhere: television, magazines, billboards, you name it. It worked, to an extent; the Dreamcast enjoyed several months of success before Sega cut the legs out from under it. That’s not the point, though: it showed other developers that ads in “mainstream” media worked. Soon EA’s Madden NFL series started running ads in sports programming of all kinds, introducing a smaller secondary population boom into the consumer base– one which bought only sports games– reinforcing both the culture of enclave-building and the proof that there were distinct segments within the market.

It wouldn’t be until 2006, with the release of the Wii, that a company outright addressed this segmentation of the market with games for all. From the very beginning, the Wii was meant for an audience that was not already on board; Nintendo at the time asserted that “mature gamers” were already well-served, and that there was an entire generation of people who would be willing to play games if only they weren’t so complicated or insular. The traditional consumers reacted with disgust; the rest of the world, however, reacted by opening their wallets. The Wii was a massive cash injection for Nintendo. The industry again learned the wrong lesson, leading to the flood of shovelware for the Wii and its contemporaries, all trying to cash in on the second coming of the “fad”. When the churned-out “simple” games failed to catch on, mostly because they looked and felt cheap and had no forethought put into them, motion controls were picked up on as the “reason” for the Wii’s success, and quickly imitated. Again, these were mostly failures, and even Nintendo began shying away from its motion controls later on.

This left those people who had forged their identities around the concept of being “gamers” in a bind. For over twenty years, these people had had an entire industry at their beck and call; games had been made “for them exclusively”, and if a game was a smash hit, it had almost universal acclaim within the enclave. Now, though, there were huge schisms among “gamers”: between those who liked how the expansion was happening and those who felt betrayed. Anger multiplies faster than understanding, especially on the Internet, and soon the dominant meme in the so-called “community” was “Are ‘casual’ games destroying gaming?” In 2008, this was a pressing and worrying question. In 2014, we finally have our answer:

“Yes, they did. And doing so was a good thing.”

To be blunt, the idea of being a “gamer” as the sole indicator of one’s identity is an outmoded and dangerous concept, and it must be discarded. It was carefully crafted from 1986 to 2006 through a sweet and subtle brainwashing, the Kool-Aid of which I freely admit I drank deeply. The generation that saved an entire industry in North America should be proud of having undone what the Crash of 1983 did, but that in no way means they should rest on their laurels. Like a parent clutching the bike long after the child has proven they can balance without the training wheels, the “gamers” are now doing more harm than good to the industry they love. The “Weekend At Bernie’s“-style manipulation of the identity’s remains has to stop.

Going through life bereft of any other identity, though, is dangerous, and the past few months have been a perfect crystallization of exactly why. Under the pretense of necromanticizing the “gamer” label, hatred and evil have crept in to fill the void. The ouroboros is now digesting itself; the circle is closing. People who once held the label of “gamer” are being used; having proven that they can be molded and manipulated as a whole, they are a useful tool for pushing an agenda of marginalization. It’s a conspiracy of the perceived majority: deluded into thinking that they speak for everyone, “gamers” are pushing back against the inevitable understanding that “gamer” and “person who plays games” are no longer exact synonyms.

The recurring theme throughout this entire story is that the video game industry has actively resisted very nearly every attempt to “grow up” that it has ever been backed into trying. It’s unsurprising, and it’s a little bit sad. But if you’re looking for the bright side in all of this, take this one: I personally feel that video games have given me so much more than I could have hoped to experience on my own. The dorky little eight-year-old me who played Kid Icarus went on to check out books on mythology from the library, lecturing my relatives on the Greek gods at every chance I got. The fourteen-year-old me who was enthralled by the concept of magical technology in Final Fantasy VI started writing (bad) stories about a world where magic replaced electricity. The twenty-one-year-old me parlayed a love of Pokémon into a job at a game store, where I could share the games that I loved with people who came in– where I could bring people the joy that games had brought me. Being a “gamer” has given me the seeds– and only the seeds– to most of the good that I’ve accomplished in my life. But it’s been the rest of my life that made those seeds germinate and bloom.

Gamers are dead. Long live those who survived being gamers.

The Gates

The last few months, as I’m sure you’re aware, have been pretty rough within the tech and video game industries. It started with a disgruntled post on a message board, claiming to be from an ex of a notable female video game developer. The post accused that developer of having had a brief romantic relationship with a writer for a game review website. This sparked a discussion of ethics in video game journalism and the interconnectedness of reviewers and developers, both metaphorical and literal.

The review website conducted its investigation, found no evidence of wrongdoing, and revised its policies to prevent the appearance of impropriety in the future. In any other industry, with any other individuals involved, and with any other consumers raising the alarm, that would have been the end of it. Instead, it has since descended into a maddening maelstrom of abuse and hate, with invasions of privacy being perpetrated both for and against “the cause”. Rather than look at the currently-ongoing imbroglio, though, I think it’s important to step back and take a look at both how the metaphorical tropical depression escalated into a full-blown hurricane of hostility, and how this is only a bellwether of what is going to happen in the future.

The first thing you need to be aware of is that the seeds of the current mess were sown with the hashtag “#gamergate”. A hashtag (in case you haven’t already been bludgeoned over the head with the word since Twitter hit critical mass in 2010) is a text marker, preceded with the “hash” or “pound” symbol, that is added to designate a tweet as being part of a larger conversation. For example, when television shows air, you’ll often find (either in place of or right above the ubiquitous “bug” station identifiers in the lower right-hand corner of the screen) a hashtag that is recommended for use when tweeting about the show; in my preferred case, “#PoI” would be how I would scream to the world that I was referring to the currently-broadcasting episode of Person of Interest. Hashtags run the gamut from terse to almost taking up the entirety of the 140 character limit for tweets, but the important thing to remember here is that they are a method of self-selection: the person writing the tweet consciously chooses to apply the label to the tweet– and by extension, themselves, if that hashtag is representative or symbolic of something greater.

So we have this hashtag of “#gamergate”, and we have some people who have chosen to use it when tweeting or writing about video games. Great. But the problem here is one that is unique to semi-anonymous electronic communications: anyone can use the hashtag on any tweet. There is no hierarchical leadership within Twitter that dictates exactly how the hashtag is to be used. This is by design, to an extent; the hashtag is a bit of metadata, as opposed to data in and of itself. (Metadata is, at its core, data about data. If you think of your car as a piece of data, the fact that it is blue is metadata.) Where things get interesting– and by “interesting” I mean “Oh God what happened I turned my back for like five seconds and it’s all on fire now”– is that there is also no authority structure to say that the tag is being misused.
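The mechanics here are worth making concrete. A hashtag is nothing more than a pattern in the tweet’s text that clients and search tools pick out after the fact; there is no registration step and no gatekeeper. A minimal sketch in Python (the regex here is a deliberate simplification of Twitter’s actual entity-extraction rules, which handle Unicode and punctuation edge cases this ignores):

```python
import re

# Simplified pattern: "#" followed by word characters.
HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(tweet: str) -> list[str]:
    """Return every hashtag the author chose to attach to a tweet."""
    return HASHTAG_RE.findall(tweet)

# Two tweets with opposite intent; because the tag is self-applied
# metadata with no validating authority, both land in the same
# conversation stream.
print(extract_hashtags("Disclose your conflicts of interest. #gamergate"))
print(extract_hashtags("Great episode tonight! #PoI #gamergate"))
```

The point of the sketch is the absence of any check: nothing in the extraction step asks whether the author is “allowed” to join the conversation the tag names, which is exactly the property the rest of this section turns on.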

Let’s step away from the internet for a moment and take a look at a similar phenomenon, one that exists in the American political system, specifically bill riders and earmarks. Within the two houses of the US Congress, laws begin their existence as bills to be proposed on the floor of the House of Representatives or the Senate. (The rules for what starts where are arcane and beyond the scope of this article.) Now, these bills have pretty straightforward aims at their genesis: let’s take a hypothetical example of a bill that says owners of blue cars get a 1% tax break for a year. (What? I like blue cars.) The bill has to reach a certain level of approval within the originating committee in the House or Senate before it can be presented to the legislature at large, where it has to reach yet another threshold before going to the other house, where– you guessed it– they have to approve it at a certain level. Now, say for a moment that the senator from Pennsylvania has an objection to the bill, because (probably due to cadmium deposits found in the old coal mines or something) blue cars don’t sell nearly as well in his state. The PA Senator can ask for a change to the bill– a rider– that says that in Pennsylvania, the break is extended to red cars as well. These riders, it should be noted, don’t have to have anything to do with the original bill; a Senator from Connecticut could ask for a rider approving several million dollars of federal funding for a bridge repair. You can repeat the process for however many legislators it takes in order to get past the threshold of approvals before bringing the bill to a vote. This creates a rather difficult catch-22 for the legislators who have to choose between supporting a bill that has unpleasant riders, or voting against a bill that could do good because of the riders attached to it.

How does that tie into video games? Remember that “#gamergate” is both anarchic (in that it has no authority structure) and self-selected. Anyone can attach the hashtag to tweets of all kinds, ranging from demands for stronger ethical-conduct rules in video game journalism to detailed rape and death threats against developers and their families. In a historically traditional view of anarchy-advocacy (that is, people arguing for anarchy as a method of self-governance), the group should be policing itself and clamping down on the destructive behavior of the latter. It isn’t. If anything, the tag has been co-opted by those making the threats– tyrants, and I use the word “tyrant” in its vernacular sense of “violent autocrat”, not my usual tongue-in-cheek definition of “necessary minimally-exercised authority”. The people arguing for greater transparency in video game journalism are being drowned out by those who would see game developers driven from their homes simply for making games that questioned the status quo.

So this presents an interesting question: which is the real face of the “#gamergate” movement, terror or accountability? The answer is, frankly, both. Because, and this is an important distinction, it is possible to be correct without necessarily being right.

I’m not arguing that there are no problems with how video games are covered and how they are presented to the public. The gaming press absolutely is complicit in the current state of affairs where games receive massive amounts of hype prior to release only to be abject trainwrecks. I’m also not arguing that there doesn’t need to be a greater female and minority voice within the video game industry. The most interesting and engrossing games I’ve played over the last five years have been female-developed. But what is undeniable is that it’s now not possible to declare any of these points without being implicitly associated with sociopathic jerks. That perception is self-perpetuating: it drives away people who would be a moderating or mitigating voice and attracts, well, more sociopathic jerks.

Unfortunately, the “#gamergate” hashtag is beyond salvaging as it currently stands. It’s highly unlikely that the voice of reason could ever regain control of the narrative that’s attached to the tag. The movement has, as it was destined to do, moved on. And this is just the beginning of something we’re going to see a lot more of in the future.

What’s interesting to note about the “#gamergate” phenomenon is that it achieved success in both of its goals. It started as an outcry against corruption in video game reporting, and it resulted in several sites re-evaluating their policies. It then moved on to terrorizing women in video games, something that also gained swift results. The problem is that logically, these should have been two completely different movements, and people who pushed hard for one goal found themselves, and their credibility, swept into a tsunami of support for the other– one they may never have wanted to support.

I’m reminded of a quote from H.P. Lovecraft’s The Case of Charles Dexter Ward: “Do not call up what you cannot put down.” The people who started fighting for the cause of transparency found easy and fast allies in misogynists and psychopaths, and didn’t stop to think that when the goal of transparency was achieved, it might be a little hard to dial back the frothing anger stirred up in their erstwhile allies. Now they find themselves in the back-seat, desperately trying to reclaim the reins of their movement from the people they egged on just days– or even hours– before. This prompts the feeble cries of “we’re about transparency!” when the label is overwhelmingly being used to justify threatening and stalking women in the industry.

If there was a chance to separate out the two sentiments, it has long since passed. The beast, called up from the depths, can no longer be put down. All we can do now is close the gates, to prevent something far, far worse from emerging.

EDIT, 15 October 6:30a: Since the original publication of this post, an individual inspired by the “#gamergate” movement has sent a threat to Utah State University promising violence against its students if USU went ahead with plans to have Anita Sarkeesian as a speaker today. Sarkeesian herself cancelled the talk, citing insufficient security measures at USU. The fact that actual violence was threatened as a result of the movement’s momentum is troubling, and within back-channels organized for the movement the cancellation is being hailed as a success, which is even more troubling.

A post on the NeoGAF forums describes the inaccuracies and myths that lie at the core of the continued assertion that the “#gamergate” movement is still about specific ethical grievances. As I mentioned yesterday (above), the initial impetus for the movement was resolved within a few days of its revelation. What has continued has been an embarrassment and a shame upon a hobby that has brought so many people together. The threats of violence must stop. The dishonesty about what is going on must stop.

Finally, Kris Straub, author of the webcomic Chainsaw Suit, has posted “The Perfect Crime”, which makes, with far more brevity, this post’s ultimate points regarding why self-selected movements are going to be problematic in the future. Like it or not, “#gamergate” was successful in both of its goals– the short-term one (of investigating the ethics concern) and the long-term one (stalking and threatening women). It’s now a textbook case for deception in public relations. You can expect this sort of “we’re not saying we condone the actions of the extremists in our movement, but we’re going to accept and claim responsibility for the results that those extremists get us” play to start showing up in issues that really matter.

And it’s the promise of that kind of future, where discussion is intentionally obfuscated and civilized argument is impossible, that terrifies me just as much as any threat to my person.

Winding Down

Another quiet day. With E3 over there’s not much else to talk about until at least the end of the month, when I’ll be looking to kick my plans into high gear. Other than that, I’ll try to keep things steady here.

That said, getting a Twitter reply from the author of the Squid Girl manga was a nice touch to today, so… yeah.

War Changes Pretty Regularly In Point Of Fact

Games Workshop released the seventh edition of the core Warhammer 40,000 rulebook late last month. I haven’t picked it up yet, but I did get a chance to flip through it a couple of days ago; overall, there’s not really a whole lot of change to the main rules. There are certainly none of the massive changes that there were between the fifth edition and the sixth, which makes it somewhat strange that GW chose to release it as a full new book rather than an update. On the other hand, this is GW we’re talking about, so the greed is sort of expected. The sixth- and seventh-edition army books are also rather high-priced for their contents, and they herald new models for those armies, which are also relatively high-priced. It takes a lot to keep up with the game, particularly if you’re trying to maintain multiple armies.

Fortunately for me, however, I’ve been working towards painting up the entirety of what I have, which is a slow process but very, very rewarding. Then again, paint isn’t infinite…


I missed yesterday. That’s a loaded phrase, actually; it was one of the most awesome days for gaming in recent memory, and it was also a day when I should have posted, but didn’t. I of course had other things on my mind. Still, considering that I had meant to have daily posts during my college career, I can afford to miss a few days here and there during the run-up; while I’m trying to get back in the habit of one-a-day, please bear with me.

Aaaaanyway. I was distracted yesterday by some family stuff– nothing too serious, just an ongoing thing– and by E3, which continues today. With the show still going on I don’t want to comment too much, but I would like to say that so far I’m really excited for games that aren’t coming until the end of this year at the earliest. Particularly Splatoon; that was a very nice surprise out of Nintendo. I hope the gamble pays off for them.

Oh, and I had a math placement exam. We shall not speak again of the math placement exam.

FrE3 Association

It’s strange, to me, that I should be so interested in E3 this year when, by and large, I don’t have as much passion for video games as I used to. Don’t get me wrong, I’m still a gamer; these days I just have to be reminded when the big stuff is happening. For example: I actually forgot that E3 was this coming week. But I think, given everything else going on, I can be forgiven for this lapse.

Anyway. In years past I’ve given predictions for what there’ll be at each of the major booths. This year will, of course, be no different. But it should be noted that I’m going to be glossing over Microsoft’s booth this year, simply because I’m not at all interested in the Xbox One’s offerings, and the 360 has been dropped like the proverbial hot potato. The Xbox One isn’t yet a true dud– no system should ever be counted out in its first year– but there’s simply nothing compelling about the machine compared to the PS4 or even the Wii U. (Even the PS4 would be a hard sell for me if not for some extremely good luck earlier this year that landed one in my home… but that’s a different story.) The system is still struggling with its launch jitters, something the PS4 is also dealing with, and so neither of the so-called “true next gen” systems really warrants more than passing attention. Besides, the two are functionally equivalent anyway, and the concept of an exclusive being a system-seller is laughable in this day and age.

So let’s start with the Wii U. It was just coming into its own at the beginning of this year with Wind Waker HD and Mario 3D World landing, and it was followed closely by Mario Kart 8. All fantastic games, and all strong contenders for the coveted “Not This Shit Again” award from the more cynical in the media. But Nintendo has been nothing if not resilient, and the leak of “Mario Maker” is an intriguing tidbit. The odds are good that it’s a return to the Excitebike-style level-creation tool, sort of a synthesis between Mario and Little Big Planet; but there’s also the long shot that it’s a revival of the artwork and animation creation tools such as Mario Paint or the 64DD tools. Heck, we don’t even know if it’s a Wii U title. Other returning titles will be the Pokémon fighting game first revealed in glimpses about a year ago, Smash Bros. getting a couple more announced fighters (including Palutena, who was leaked a few months ago), and possibly Mario Kart 8 classic course DLC. New reveals will probably include a Metroid game that thematically follows on from Other M, taking the series in a more traditional FPS direction; Planet Puzzle League/Panel de Pon Online for Wii U, an eShop title which will include local multiplayer modes; and a resurrection of an old IP from the 8- or 16-bit days. The million-to-one bet is on a Mother collection. Which will, as always, never happen, but I felt like I should at least continue the tradition of futility.

Let’s stay with Nintendo and hit the 3DS. The portable miracle machine that’s kept Nintendo from sinking into 2001-era Sega levels of desperation has shown no signs of stopping, even if the 2DS has been largely a wash. The true bombshells have already been dropped, being the Pokémon Gen 3 remakes Omega Ruby and Alpha Sapphire. Most of the attention this year will be focused on those two and Smash Bros., so I wouldn’t count on too many first-party surprises. Where things are going to get interesting is integration with Wii U titles, including a possible Pokémon Colosseum game being revealed. Also, I’d expect to see DS titles join the 3DS Virtual Console, leading off with the original Professor Layton games. The sucker bet here is Nintendo disabling the region-locking on the handheld, reversing a decision that has been wrong for four years.

Sega came up in the last paragraph, so let’s head over there. The Blue Blur’s home isn’t doing too badly for itself, with disasters like Aliens: Colonial Marines finally being put behind them. The multimedia project Sonic Boom is also looking promising, but even if all we get out of it is a season and a half of a decent-enough animated show, it’ll still be a more compelling Sonic property than anything we’ve seen in recent memory. We’re almost certainly going to see those titles, and Bayonetta 2, but the list of confirmed titles is pretty slim. I’d put a quarter down on appearances of a new Shinobi title and possibly a new IP. With JRPGs having mostly gone down the toilet, any Phantasy Star title that gets any exposure is likely either going to be a MOBA or a social game, but don’t hold your breath for any of it.

Speaking of toilets, let’s talk Square Enix. FF14’s success has been a much-needed shot of Phoenix Down for the once-struggling company, but that’s been just about the only thing they’ve had going for them over the past year or so. We’re obviously going to see news on the game’s first expansion, which will introduce the Ishgard region, with battles against Shiva and Alexander as the primary focus of that pack. SE has quietly been beating the war drums about FF15, with a possible re-reveal trailer being the centerpiece of their booth (as an aside: I remember seeing the ORIGINAL launch trailer for what would eventually become 15, way back in 2006). Expect that one to be a PS4 exclusive; if the Xbox One hasn’t lit any fires over here, it’s DOA in Japan. SE will also be pushing some new titles for the 3DS and Wii U, despite most other companies fleeing from the latter: I’d say we’re going to see a new Kingdom Hearts title land on 3DS and possibly a Wii U port of 1.5 Remix. The real surprise here will be a revival of an old, disused IP– likely Saga (remaking the originals for iOS) or, and this is damn near impossible, Mana. Money to burn should go on an HD or Vita re-release of Dissidia.

Sony is also going to have a tough act to follow, with the PS4 being a modest success but not the blow-the-doors-off hit they were expecting. Though, considering that the PS3 hadn’t hit its stride until just about two years ago compared to the all-cylinders-but-petering-out 360, the concept of a slow start isn’t shocking. Sony has always been carried by its third parties, but they’ve had some in-house success with stuff like The Last of Us and Resogun. Expect a lot of the same as in the Xbox One booth, nothing too overwhelming. I’m not supposed to say anything about Playstation Now, the streaming service, as I know someone who’s been in on the beta for a few months; that said, the service should be ready to launch by the end of June, or July at the latest, and it’s a fantastic alternative to a game rental service like Gamefly. Playstation Plus is going to get a little bit of focus as well, with the free titles on offer cycling more reliably; I can honestly say that I’m very glad I picked up that particular subscription. Don’t look for too many bombshells here.

As for the remainder of the smaller studios, it’s really hard to say. Atlus has gone all-in with the Shin Megami Tensei series, and it’s about damn time; I’d be surprised if some of the older titles didn’t land on PS Now or some other digital download service in advance of Persona 4 Arena Ultimax. Aksys will probably introduce a next-gen version of BlazBlue, necessitating firmware updates to allow the current-gen fighting sticks to work with the new consoles; there are also a few quirky titles up their sleeve, as always. Bandai Namco is going to show off more titles in their classic-mascot reboot series (the “Ghostly Adventures” Pac-Man titles), none of which will be interesting to anyone older than about ten. NIS America will… okay, not even an Overlord of the Netherworld knows what they’re going to do, but likely not more Disgaea for at least a year or so; they’re definitely going to focus on Danganronpa 2. EA has their usual spate of sports, Bioware, and garbage. Ubisoft’s announcement of next-generation Tetris has probably skunked any chance of Sega bringing out the excellent Puyo Puyo Tetris in North America, so honestly they can rot for all I care. And Valve will announce a long-awaited third installment in their popular first-person shooter franchise: that’s right, kids, we’re getting Left 4 Dead 3.

Overall, without any major hardware announcements, this is going to be a pretty routine year for E3. I don’t think there will be too many shakeups in the industry; given the rather tepid reception of the new consoles, I’d think that most everyone is going to play things pretty conservatively to protect their long-run profits. The one thing that would absolutely floor me, that would cause me to not shut up about it for weeks, would be if someone, anyone, took a real risk.

Nothing Interesting

So apparently when I said that I would have something interesting enough to blog about every day, I wasn’t counting on days like this, where the most exciting thing that happened was that The Avengers was on television. Honest. I woke up, took my medication, then went back to bed until 4pm. Nice, quiet, boring day.

So I’m going to start looking into an online RPG session just so I have some regular human interaction…

An Inquiry Into Value

One thing I’ve learned about myself over the last few years is that my adage regarding wealth and gaming still holds true: “Video games will get you through a time of no money better than money will get you through a time of no video games.” Over the last few months, I’ve also noted a marked difference in how I approach gaming. In less lean times, I’ve prided myself on stockpiling an extensive library of games, old and new, in the belief that I will eventually want to play them again. More often than not, that’s true. However, as bills pile up and income slows to a trickle, I find myself having to sell off some of that library, choosing very carefully what to keep and what to lose.

The advent of digital distribution changed that paradigm somewhat; downloaded games can’t be sold back, which means I’m stuck with them forever. In some cases, though, that’s not a detriment. There are some games that I wouldn’t even countenance selling off, usually due to the extensive amount of effort I’ve put into them. (I am, of course, talking about Pokemon X.) The fact that I’m also snagging no-cost titles from services like Playstation Plus and Xbox Live Games With Gold really helps; I was surprised enough by Bioshock Infinite that I kinda regretted not picking it up sooner, to take an example. But one thing remained the same: when my budget could not handle shelling out $60 a month on a new title that might not get played after a week (as much as I loved Bioshock Infinite, being done with it in two days eased the guilt of getting it for free), the comparatively low cost of MMO subscriptions and free-to-play microtransaction-supported games fit more easily in that space.

Let’s take that last one first: I started playing League of Legends earlier this year during an all-day maintenance for the MMO I’ll get to in a bit. Originally I didn’t intend to put any money down for the game, instead using the in-game currency (Influence Points) to unlock new characters for play. However, as I found myself enjoying the game more and more, I realized that I did want to expand my options somewhat, and as a result I bought the game’s real-money currency (Riot Points). Part of this was also born of the desire to “pay for” the game: freely-available or not, the level of detail and care shown by the developers, I felt, was compelling enough of a reason to want to give back and “vote with my wallet”. The fact that I really really wanted to play as the gumiho Ahri didn’t hurt the reasoning either. When spent on character and skin unlocks, again, there is no going back– stuff is mine forever. But IP gain boosts are also on offer, and depending on your skill and patience they can be worth it. In the end, for a small amount of money, I was getting hours upon hours of entertainment.

On the flip side, we have MMOs. Shortly after its relaunch, I joined Final Fantasy XIV: A Realm Reborn, and while I still had the opportunity, I had the foresight to pay my subscription as far in advance as I could (six months at a time; I also got a preferential rate for being a subscriber from the game’s disastrous initial launch). Since then, I’ve spent a lot of time in-game, and I’ve enjoyed it as much as I’d have enjoyed any Final Fantasy title. Yes, it is a lot of grinding– more so than even the traditional titles in the series. But an MMO’s grind is made a lot more tolerable when you have friends in-game to chat and compete with. I was very lucky in that I was able to join a close-knit guild, and while I’ve been busier than normal the last few weeks, I’m excited about the new patch which is launching tonight. But most importantly, my presence is missed when I’m not on for a few days, and being greeted by friends upon logging in is something that will never get old. So yeah, while I’ve maxed out half of the jobs available in the game (and am halfway through doing so on all the others), you’ll never hear me complain about the grind.

Between the two games I’ve probably spent less than half what I would have otherwise on disc-based and single-player games, and even though I’ve picked up a few extreme discounts along the way, those two are still my current go-tos. I think that I’ve gotten the better end of the deal when it comes to value for the money, although it’s impossible to say that it’ll hold true for everyone else. Still, I think it’s a topic worth exploring on your own if you ever find yourself coming up short for the next AAA disc-based game. I’d say that there’s always a chance you might just get more than you pay for with some of these games.