|
Post by galorn on Nov 16, 2010 15:36:59 GMT -5
I had an absolute blast at the GT this year. My opponents were all really awesome guys, the tables were awesome (with one exception: stupid Necron pyramid table, grrrr), and my "ranking" was about where I figured it would be. Those things said, however:
/Start rant. My "comp" score (for the same list I played at the GT) was estimated in the initial e-mail correspondence as around 50. The comp I received at the GT was significantly lower: I received a comp of 26.
Now, even if I had received my estimated comp score of 50, all else being equal, my ranking would not have been different in the slightest. I'm fine with that. What does confuse and irritate me is that the estimate and my final score are significantly different (a drop of 24 out of 50 is around a 50% shift).
What caused that?
"Comp" in 40k IMHO is a very tricky thing. Personally I believe anytime a subjective number is applied to "level the field" the penalties and rewards for taking or not taking certain things or combos Should be written down and posted in public
The only truly "fair" way to judge the "comp" of a 40k force is unfortunately a checklist or semi checklist deal. (if Unit x then +/- y points)
What the numbers attached to the penalties or rewards are I don't care. Hell I don't care if a range of possible numbers is assigned to a given checkbox. All I care about is being told about them ahead of time.
/End rant.
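The checklist idea above can be sketched in a few lines of code. This is only an illustration of the principle: every rule, unit name, and point value below is made up for the example, since the whole point of the proposal is that the real table would be published by the tournament organizers ahead of time.

```python
# Hedged sketch of a checklist-style comp scorer, as proposed above.
# All rules, unit names, and adjustments are hypothetical examples.

BASE_SCORE = 50

# Each published rule: (description, predicate over the army list, adjustment).
# The army list is represented as a dict of {unit name: count}.
CHECKLIST = [
    ("3+ copies of any single unit",
     lambda army: max(army.values(), default=0) >= 3, -10),
    ("more than 2 tanks",
     lambda army: sum(n for u, n in army.items() if "tank" in u) > 2, -5),
    ("includes a troops unit",
     lambda army: any("troops" in u for u in army), +5),
]

def comp_score(army):
    """Apply every published checklist rule to an army list (unit -> count)."""
    score = BASE_SCORE
    for description, predicate, adjustment in CHECKLIST:
        if predicate(army):
            score += adjustment
    return score

# Purely illustrative army list: triggers all three rules above,
# so the score is 50 - 10 - 5 + 5 = 40.
army = {"predator tank": 3, "rhino tank": 1, "tactical troops": 2}
print(comp_score(army))
```

Because every rule is an explicit entry in the table, any player can compute their own score before the event, and a range of possible values per checkbox (as mentioned above) would just mean replacing each fixed adjustment with a min/max pair.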
|
|
Smitty
Sergeant
Marines die, thats what we're here for, But the Marine Corps lives 4ever & that means YOU live 4ever
Posts: 324
|
Post by Smitty on Nov 16, 2010 16:16:04 GMT -5
Ok, I'm not sure I want to open this can of worms, but here goes. I'm not sure which list was yours, but we had 25 lists that dropped from their estimated score during the final comp review, 16 of which swung by more than 10 points. I do feel bad about this and wish we had more time to contact those who were affected.

The pre-judging was handled by 3-4 of us, and since we judged the lists one at a time, it was hard to compare any one list against the others; we didn't have the big picture. By the time of the final review, we had 2 more people helping. We reviewed all the lists together, and as a group we caught many things that were not caught in the pre-judging, thanks both to the extra eyes and to the open discussion of the lists. Seeing all the lists at one time gave us a bigger picture.

That's the only explanation I can give: right or wrong, this is how it panned out. This process will be looked at very carefully for next year and hopefully changed for the better.
|
|
|
Post by fishboy on Nov 16, 2010 21:28:56 GMT -5
My goal for this year was to do very badly with an apparently cheesy list. That way I can convince them the list is weak and own all of you next year, hehe. Just kidding, btw....
|
|
|
Post by galorn on Nov 17, 2010 1:27:42 GMT -5
Thanks for the reasoned and clear response.
|
|