Wata Crossovers

By Jonas McCammon

 

 

Foreword: We are pleased to share with you the first of many collector- and community-sourced articles to come. We thank Jonas for his contributions and the work he put into this educational, data-driven piece. Please note that any opinions or statements in this article are not necessarily shared by Wata. That being said, we always love to hear from the community no matter what side of the fence you stand on, and we further encourage any of you who may have expertise in a certain area, insights into the market, or general opinions on the state of the collecting hobby to share your thoughts with us. Who knows, we just may publish it to share with the whole community! We have more collector articles in the pipeline, so be sure to stay tuned…

 

 

Introduction

For a sealed game collector, condition is king, and higher grades command higher prices. With multiple game grading companies in the market, it is natural to wonder how games cross over between them. This article examines these crossovers in detail.

First, let’s understand the data set as compiled. In writing this article, I reached out to various trusted collectors for their first-hand experience with grades received on crossover games. It is important to distinguish first-hand feedback from the submitter from hearsay that could be embellished or accidentally misquoted. Wata was also able to provide a significant number of the Carolina Collection data points directly. In total, 335 data points were analyzed.

Before getting to the detailed analysis, let’s address some of the limitations. This data is for sealed games only and is heavily biased toward NES and the 85+ grade, as those were the primary focus of the Carolina Collection. A few other platforms are represented, including SNES, N64, Game Boy, Genesis and PS1, but NES comprises over 85% of the data set. Analysis is provided for the 80, 80+, 85, 85+ and 90 levels, though a sample of more than 30 data points per tier would be desired to establish statistical significance. The 80 and 80+ levels would likely show more consistent trends with more data points.

Another important consideration is that grading is inherently subjective, and a company or individual may assess damage differently. Major and frequently occurring flaws, such as creases, wrap tears and corner pokes, are easier to quantify. Less frequently occurring flaws, such as humidity damage (warping) and sun damage, are more subjective. Finally, some items, such as manufacturing defects or aftermarket stickers, may or may not be viewed as flaws depending on the individual. Thus the same game can be graded completely differently on two separate scales based on how defects are prioritized and classified within each grading algorithm.
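To illustrate that point, here is a purely hypothetical sketch in Python: the same set of flaws produces different numeric grades under two made-up weighting schemes. Neither scheme reflects any company's actual rubric; the defect names and penalty values are assumptions for illustration only.

```python
# Purely illustrative sketch: neither company's actual rubric.
# Two hypothetical schemes score the same flaws differently because
# they weight (prioritize) each defect type differently.

flaws = {"corner_poke": 1, "wrap_tear": 0, "sun_fade": 1, "sticker": 1}

# Hypothetical penalty weights per defect occurrence (made up for illustration)
scheme_a = {"corner_poke": 0.2, "wrap_tear": 0.4, "sun_fade": 0.6, "sticker": 0.1}
scheme_b = {"corner_poke": 0.3, "wrap_tear": 0.3, "sun_fade": 0.2, "sticker": 0.8}

def grade(flaw_counts, weights, top=10.0):
    """Subtract a weighted penalty for each flaw from a perfect score."""
    penalty = sum(weights[f] * n for f, n in flaw_counts.items())
    return round(top - penalty, 1)

print(grade(flaws, scheme_a))  # 9.1 under scheme A
print(grade(flaws, scheme_b))  # 8.7 under scheme B
```

The same copy lands nearly half a grade apart simply because one scheme treats stickers as a minor issue and the other treats them as a major one.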

Raw Data

Now, onto the analysis! First let’s just examine the raw data for each grade:

 

At the 80 level, grades have ranged from 7.0/B to 9.6/A, with 9.0/A the most frequently occurring grade. This is based on a small sample of 17 data points, so generalizations may not be statistically significant. The 7.0/B and 9.6/A grades could be potential outliers; they may have been undergraded or overgraded, or may possess unique condition flaws that are subjective in nature, and could warrant further analysis. The “sweet spot” appears to be in the 8.5/B+ to 9.2/A range.

 

At the 80+ level, grades have ranged from 7.0/A to 9.4/A+, with 9.4/A the most frequently occurring grade. This is based on a small sample of 20 data points, so generalizations may not be statistically significant. The 7.0/A grade could be a potential outlier that would warrant further analysis. The “sweet spot” appears to be in the 9.0/A to 9.4/A+ range.

When compared to the 80 grades, 80+ games appear to have nicer wrap. The 80+ games average borderline A to A+, while 80 games sit more in the B+ to A seal range.

At the 85 level, grades have ranged from 7.0/A to 9.8/A+, with 9.4/A+ and 9.6/A+ the most frequently occurring grades. This is based on a sufficiently sized data set to bolster the analysis. The 7.0/A grade appears to be a potential outlier that would warrant further analysis. The 9.8/A+ grade has a significant sticker that may have been penalized more harshly on another grading scale. The “sweet spot” appears to be in the 9.0/A to 9.6/A+ range.


At the 85+ level, grades have ranged from 8.0/A+ to 9.8/A++, with 9.4/A+ and 9.6/A+ the most frequently occurring grades. This is based on a comparatively large data set, allowing for meaningful conclusions. Potential outliers include 9.8/A++ and 8.5/B+. The “sweet spot” appears to be in the 9.2/A to 9.6/A++ range.

When compared to the 85 grades, 85+ games appear to have nicer wrap. At this level, A++ seal grades appear much more frequently, and 85+ grades appear to have much more upside than 85 grades as well.


At the 90 level, grades have ranged from 9.4/A to 9.8/A++, with 9.8/A++ and 9.4/A+ the most frequently occurring grades. This is based on a moderate sample of 34 data points, providing a somewhat clearer trend than the 80 and 80+ analyses. The “sweet spot” appears to be in the 9.4/A+ to 9.8/A++ range.

When compared to 85+ grades, 90 games are even nicer, with more A++ seal grades and 9.8 box grades as a percentage of their total.

Data Normalization

Unfortunately, the crossover data varies significantly across and within grades. While the tables key in on the “sweet spots”, outliers or differing opinions cause significant variability. The next steps are an attempt to normalize the data.

On the Wata scale, a 10.0/A++ game is the top of the line, but which is better, a 9.8/A++ or a 10.0/A+? They are both one step removed from a perfect grade and should theoretically be valued about the same. The same argument applies to 10.0/A, 9.8/A+ and 9.6/A++, all two steps removed from 10.0/A++, with which is better being a matter of preference.

Using this line of thinking, I have defined notional grade bands as I perceive them under the Wata scale. They look as follows:

 

Mapping the two variables of box and seal to a single text description ranging from Good- to Gem Mint allows a better “apples-to-apples” comparison across both scales. When doing this, the raw data maps as follows:

Now the picture becomes much clearer! The 80 and 80+ grades range from Good to Near Mint, though a larger data set would allow for more accurate generalizations. The 85 and 85+ grades are typically in the Near Mint range, though 85 has a lower floor at Very Good and 85+ has a higher ceiling at Near Mint / Mint. The 90 grade appears to be solidly in the Near Mint to Mint range, though more data could potentially enter the Excellent range too. Either way, this table demonstrates the correlation that nicer games tend to cross over higher, though any individual item could be a potential outlier due to unusual wear or improper categorization.
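To make the “steps removed from perfect” idea concrete, here is a minimal Python sketch. The box-grade and seal-grade ladders follow the standard Wata increments, but the band labels assigned to each step count are my own assumption for illustration; the band table above remains the authoritative mapping.

```python
# A minimal sketch of the "steps removed from perfect" normalization.
# The ladders follow the published Wata increments; the band labels keyed
# to each step count are an assumption for illustration only.

BOX_LADDER = [10.0, 9.8, 9.6, 9.4, 9.2, 9.0, 8.5, 8.0, 7.5, 7.0, 6.5, 6.0]
SEAL_LADDER = ["A++", "A+", "A", "B+", "B", "C+", "C"]

# Hypothetical band labels keyed by total steps removed from a 10.0/A++.
BANDS = ["Gem Mint", "Mint", "Near Mint", "Excellent", "Very Good", "Good", "Good-"]

def steps_from_perfect(box, seal):
    """Count how many grade steps a box/seal pair sits below 10.0/A++."""
    return BOX_LADDER.index(box) + SEAL_LADDER.index(seal)

def band(box, seal):
    """Collapse the two-variable grade into a single notional band label."""
    steps = steps_from_perfect(box, seal)
    return BANDS[min(steps, len(BANDS) - 1)]

# A 9.8/A++ and a 10.0/A+ are each one step removed, so they share a band.
print(band(9.8, "A++"), band(10.0, "A+"))   # both "Mint" under these assumptions
print(band(9.6, "A++"), band(10.0, "A"))    # both two steps removed -> "Near Mint"
```

The design choice here is simply to treat a box step and a seal step as equally costly; a collector who values wrap more than box could weight the seal steps more heavily before banding.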

Conclusions

So what are the key takeaways from this analysis? While variability will exist within any given grade because the box and seal are graded separately, these appear to be the “sweet spots” for each crossover. Further analysis could paint an even better picture.

 

80 – 8.5/B+ to 9.2/A

80+ – 9.0/A to 9.4/A+

85 – 9.0/A to 9.6/A+

85+ – 9.2/A to 9.6/A++

90 – 9.4/A+ to 9.8/A++
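For quick reference, the same ranges can be written as a simple lookup table; this is just a restatement of the list above, not additional data.

```python
# The "sweet spot" ranges above, restated as a lookup table.
# Each Wata grade maps to the (low, high) crossover range observed in this data set.
SWEET_SPOTS = {
    "80":  ("8.5/B+", "9.2/A"),
    "80+": ("9.0/A",  "9.4/A+"),
    "85":  ("9.0/A",  "9.6/A+"),
    "85+": ("9.2/A",  "9.6/A++"),
    "90":  ("9.4/A+", "9.8/A++"),
}

low, high = SWEET_SPOTS["85+"]
print(f"An 85+ typically crosses between {low} and {high}.")
```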

 

Ultimately, my words of wisdom are to “buy the game, not the grade”. The grade matters as an unbiased expert opinion, but if you plan to collect video games, you should work on developing an eye for condition. While it is hard to properly assess condition once a game is already slabbed, you can still gain a better understanding by thoroughly examining photos. Good luck with your collecting!