Why is the math "hard"?

After the next playtest packet, the D&D team will be working on the math.

That is, now that they have gathered feedback from players while deliberately rough math was in use, they intend to go over their data and create a system of math that reflects the 'feel' that people want.

Why do people think this is a hard thing to do?  Why do they think it's bad to "redo the math from scratch"?

Is Excel so old a program that people don't use it anymore?  I'm not understanding why it's thought to be difficult to plug in a desired number of rounds per level per combat per class and get numbers spit back out at you.

Or, given a range of HP per class per level, put in a desired number of rounds of survivability and export monster damage.

Someone please explain. 

This seems like something which might take a few hours per iteration.
Then "word merge" the results of the Excel table into the rules documents and voila, you have a new gaming system... 
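For what it's worth, the spreadsheet arithmetic described above really is trivial on its own. Here is a minimal Python sketch of it; all the numbers (HP values, target rounds) are invented placeholders, not actual playtest data:

```python
def monster_damage_budget(character_hp: float, target_rounds: float) -> float:
    """If a character with character_hp should survive target_rounds of
    focused attacks, this is the monster's damage-per-round budget."""
    return character_hp / target_rounds

def rounds_to_kill(monster_hp: float, party_dpr: float) -> float:
    """How long a fight lasts if the party deals party_dpr average
    damage per round (the naive version, no randomness at all)."""
    return monster_hp / party_dpr

# A 28 HP character who should survive 4 rounds gives the monster
# a budget of 7 damage per round:
print(monster_damage_budget(28, 4))   # 7.0
```

The dispute in the rest of the thread is not really about this core division, but about whether the inputs (HP, DPR, rounds) can be pinned down at all.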
Math isn't that hard. WotC sucks at it.

4e has the best math of any edition of D&D to date, but there are still many glaringly obvious math problems in 4e. These problems were pointed out by many forum goers months before 4e went to print. So, I have no trust that WotC can do the math on their own. Especially if they want to keep the "feel" of the game they got to using shoddy non-math.



+1

And trying to bolt the math on at the end of the creation process makes it feel tacked on.

The math should be the core, the foundation, of the game.  I don't buy an RPG for the 'feel' or the 'fluff'; as a DM, I bring the 'feel' and 'fluff'.  I pay for good, solid, well-worked, playtested math.

 

Remember, this is a public forum where people express their opinions. Assume there is an "In my humble opinion" in front of every post, especially mine.  

 

Things you should check out because they are cool, like bow-ties and fezzes.

https://app.roll20.net/home  Roll20 great free virtual table top so you can play with old friends who are far away.

http://donjon.bin.sh/  Donjon has random treasure, maps, pickpocket results, etc., for every edition of D&D.

The main issue is synergies, not the basic cases.   I don't remember the normal attack progressions being an issue until 4e.   

It's the combination of feats/maneuvers/spells that can lead to out-of-bounds issues.

I am sure the game won't be perfect on release.  Of course, I've never seen a perfect game ever, so that's not a big worry of mine.   It will likely be usable, which is enough for me. 

My Blog which includes my Hobby Award Winning articles.

Math isn't that hard. WotC sucks at it.

4e has the best math of any edition of D&D to date, but there are still many glaringly obvious math problems in 4e. These problems were pointed out by many forum goers months before 4e went to print. So, I have no trust that WotC can do the math on their own. Especially if they want to keep the "feel" of the game they got to using shoddy non-math.

Which game company does it better?

What is the point of the math if not to elicit a certain feel to the game? 


But this thread is addressed to those people who claim it is hard, and who claim it matters dramatically when you create, fix, or establish the math. 



+1

And trying to bolt the math on at the end of the creation process makes it feel tacked on.

The math should be the core, the foundation, of the game.  I don't buy an RPG for the 'feel' or the 'fluff'; as a DM, I bring the 'feel' and 'fluff'.  I pay for good, solid, well-worked, playtested math.

 

Which game did you buy that had this perfect math you speak of?

Can I assume that you agree with the characterization of 4e as "a mechanic looking for a story"? 
Because nobody can seem to accept the idea that X level-Y adventurers going up against X level-Y monsters should mathematically end up dead half the time.
Instead, the insistence is that X level-Y adventurers must be able to win against 2X level-Y monsters about 99% of the time.  That's why the monster math is completely screwed up.

If that is what 80% of the people want, why is that a bad thing?
Let's say that you have a simple question, which should have an obvious, mathematically quantifiable answer: how long will a fight between a party of fourth-level characters and an ogre last?

Seems like a nice, discrete, quantifiable question, doesn't it? Just calculate the group's average damage-per-round, based on to-hit scores, and divide the ogre's hit points by that number. Ready, yes?

Only...
How do you arrive at the to-hit number? Would we assume everyone has maxed out their primary stat? Use a standard array?
What kind of characters are in this party? All fighters is going to give me a very different answer than a fighter-thief-wizard-cleric party. And did the thief get a sneak attack in at the beginning of the fight?
Oh wait, a wizard might be using spells which dramatically change the fight. What is the chance that their Sleep spell works, buying the party a free round of attacks? Oh, and the fighter might have a polearm, knocking the ogre prone and totally changing the to-hit rolls. Oh, and is the ogre spreading damage, or hitting one character? Because if the latter, what are the chances one will succumb, changing the party's damage-per-round? Does it matter which character the ogre flattens? Yes it does. And then what chance is there that the cleric could revive that character in time to continue fighting? And is this fight at the beginning of the day, so characters are holding on to daily powers, or is it the end of the day and they're letting loose? What percent of which powers would they still have?

And that's one monster. What if the ogre had some goblin minions, multiplying the complexity of the problem?

See? Math is hard.
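For contrast, the "simple" version of the calculation that this post starts from, before all the complications, could be sketched like this in Python. Every number here (hit chances, damage, the ogre's HP) is an invented placeholder:

```python
import math

def average_dpr(hit_chance: float, avg_damage_on_hit: float) -> float:
    """One character's damage per round: chance to hit times the
    average damage dealt on a hit."""
    return hit_chance * avg_damage_on_hit

def naive_fight_length(monster_hp: int, party_dpr: float) -> int:
    """Monster HP divided by party DPR, rounded up to whole rounds."""
    return math.ceil(monster_hp / party_dpr)

# Four characters who each hit 60% of the time for 7.5 average damage,
# against a 59 HP ogre (all placeholder numbers):
party_dpr = 4 * average_dpr(0.60, 7.5)    # 18.0
print(naive_fight_length(59, party_dpr))  # 4
```

The post's point, of course, is that every input to this sketch is exactly what's hard to pin down.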

The main issue is synergies, not the basic cases.   I don't remember the normal attack progressions being an issue until 4e.   

It's the combination of feats/maneuvers/spells that can lead to out-of-bounds issues.

I am sure the game won't be perfect on release.  Of course, I've never seen a perfect game ever, so that's not a big worry of mine.   It will likely be usable, which is enough for me. 



The normal progression was an issue, though. That is why the expertise and improved-defense feats were introduced (and considered taxes). This is why the later scaling of racial attacks needed +3/6/9 instead of +2/4/6. Sadly, that didn't even happen half of the time, even after the designers knew of the scaling issue.

Not to mention the math for monster HP and damage was out of whack. That led to the long 4e combats and the feeling that 4e characters couldn't die. By MM3 that was mostly fixed, with monsters having a lot less HP and doing a lot more damage. But that is basic math that should have been easy to see.

In 4e, monsters scaled at +1 to everything per level: +1 to attacks, defenses, etc. Note: many monsters had +/-5 to attacks and defenses depending on type and role, making them far more varied than any 5e monsters despite the mythical "treadmill".


In contrast, 4e PCs with the correct primary stat scaled at about +26 over 30 levels. If the PCs did not have two primary stats that they pumped exclusively, they fell even further behind in skill checks, defenses, etc. as well.

This math was all easy to fix, though, had the system gone through more rigorous math testing. They could easily have removed scaling +X bonuses from magic items, removed ability score increases, etc., and the numbers would have played out much better. Instead they were left with band-aid fixes that still left many issues unresolved (such as the high-Str, high-Con warrior having terrible Dex and Will saves).

The math in 4e was really good from levels 1-6 or so, but started breaking down more and more past that point.

Let's say that you have a simple question, which should have an obvious, mathematically quantifiable answer: how long will a fight between a party of fourth-level characters and an ogre last? Seems like a nice, discrete, quantifiable question, doesn't it? Just calculate the group's average damage-per-round, based on to-hit scores, and divide the ogre's hit points by that number. Ready, yes?

Only... How do you arrive at the to-hit number? Would we assume everyone has maxed out their primary stat? Use a standard array? What kind of characters are in this party? All fighters is going to give me a very different answer than a fighter-thief-wizard-cleric party. And did the thief get a sneak attack in at the beginning of the fight? Oh wait, a wizard might be using spells which dramatically change the fight. What is the chance that their Sleep spell works, buying the party a free round of attacks? Oh, and the fighter might have a polearm, knocking the ogre prone and totally changing the to-hit rolls. Oh, and is the ogre spreading damage, or hitting one character? Because if the latter, what are the chances one will succumb, changing the party's damage-per-round? Does it matter which character the ogre flattens? Yes it does. And then what chance is there that the cleric could revive that character in time to continue fighting? And is this fight at the beginning of the day, so characters are holding on to daily powers, or is it the end of the day and they're letting loose? What percent of which powers would they still have?

And that's one monster. What if the ogre had some goblin minions, multiplying the complexity of the problem? See? Math is hard.

I'm not seeing what is difficult about this.

1. We are not asking how long a fight lasts.  And if we are asking how long a fight lasts, it will be with known inputs.  (If they max their main stat, how long does the fight last?  If they use minimum numbers, how long does the fight last?  If they use average numbers, how long does the fight last?)  That is, I assume that through the playtest the designers know a fight against an ogre should last 4.5 rounds, and now the only question is what bonus to hit each class should get.

2. You run a loop 1,000 times, using RAND() in the Excel formula (or Math.random() in a script).  All of those variables you asked about are averaged out, and you get your answer.  

Doesn't seem that difficult to me.  (Though granted, I'm not an expert on probability math, but from what I have seen on charOp boards, other people are, and they have already crafted the equations needed to figure these things out.) 

Perhaps you have an added complication of trying to run queries against AnyDice.com.
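A minimal Python version of the "simulate it many times with random rolls" approach might look like this. The ogre's stats and the attackers' bonuses are invented placeholders, not real stat-block numbers:

```python
import random

def simulate_fight(ogre_hp=59, ogre_ac=11, attackers=4,
                   attack_bonus=5, damage_die=8, damage_bonus=3,
                   trials=10_000):
    """Monte Carlo estimate of fight length: each round, every attacker
    rolls d20 + attack_bonus against ogre_ac and, on a hit, rolls
    damage_die + damage_bonus. Returns the mean rounds over all trials."""
    total_rounds = 0
    for _ in range(trials):
        hp, rounds = ogre_hp, 0
        while hp > 0:
            rounds += 1
            for _ in range(attackers):
                if random.randint(1, 20) + attack_bonus >= ogre_ac:
                    hp -= random.randint(1, damage_die) + damage_bonus
        total_rounds += rounds
    return total_rounds / trials

random.seed(0)
print(simulate_fight())  # average fight length in rounds
```

This averages out the dice, exactly as the post suggests; what it cannot do, as the replies below argue, is average out the choices players and DMs make.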
Because nobody can seem to accept the idea that X level-Y adventurers going up against X level-Y monsters should mathematically end up dead half the time.
Instead, the insistence is that X level-Y adventurers must be able to win against 2X level-Y monsters about 99% of the time.  That's why the monster math is completely screwed up.

If that is what 80% of the people want, why is that a bad thing?

Because it's the source of bad math, more so than LFQW.



Ok, let me ask this another way...

If not enough people will buy the game, you end up with a game that you have nobody to play with. Why is that math bad?  Why create math that has a party die 50% of the time, if people do not want to die 50% of the time? 



I think most people buy the game because it gives them an experience they enjoy, and don't give a rat's ass about analyzing the math. 

Mathematically, a party of four level 3 PCs would die 50% of the time against a group of four level 3 monsters, but thanks to the wonderful aspect of RPGs where you are limited only by your imagination, actual play hardly ever results in that.  Battles often aren't static arena-style fights run one after the other; players can easily do things to swing the odds in their favor via creative thinking and planning.

Burning oil, caltrops, ambushes, scouting, manipulation, etc. are all examples.
Math isn't that hard. WotC sucks at it.

4e has the best math of any edition of D&D to date, but there are still many glaringly obvious math problems in 4e. These problems were pointed out by many forum goers months before 4e went to print. So, I have no trust that WotC can do the math on their own. Especially if they want to keep the "feel" of the game they got to using shoddy non-math.




A lack of trust is the issue then. 



I think most people buy the game because it gives them an experience they enjoy, and don't give a rat's ass about analyzing the math. 



And that is why they ran the playtest they ran.   The game won't rise or sink based upon the math being perfect.  Despite being the worst edition for math issues, 3e (in the form of Pathfinder) is still going strong.

My Blog which includes my Hobby Award Winning articles.

Why create math that has a party die 50% of the time, if people do not want to die 50% of the time? 

Math that has the party die half the time is only a problem if nobody bothers to tell the DM that it's going to happen.

If a level X party dies half the time to an equal number of level X monsters, use level X-2 monsters instead.

The game won't rise or sink based upon the math being perfect.

Incidentally, "perfect" math is invisible.
When I play a level 1 character fighting 5 level 1 rats, I want to win 80% of the time, not 0%. But that is just me. If it's a party of 4, I think killing off 15 rats is reasonable. If I'm fighting level 1 goblins, I expect to be able to kill 2 or 3 without much difficulty, or 8-12 if I'm in a party of 4.
I think it's up to the DM to create scenarios where a party wins 90% of the time, regardless of monster strength. The DM just has to know how many HD of monsters that is.

Tornado Party: a roleplaying game theory blog

Systems run: 2e, 3.5e, nWoD, cWoD, SW (West End)

Systems played: 2e, nWoD, cWoD, MET, Ironclaw, Rifts, Cyberpunk, Shadowrun

No, daganev, those types of variables are not subject to quantification in the way you're suggesting. C.SIM would be the program you would want in any case, but even then, how do you calculate the percent chance that a wizard has "Sleep" in his or her arsenal? What is the % chance a party has a ranger with the archery feat? And that's one factor; for a single combat, you're talking about hundreds. And that's going to then be run on all the thousand monster/character combos you have? And that's just one aspect of the game you're measuring. I am not a mathematician. I have been, previously, a programmer of modeling and simulation software. This is a non-trivial, likely unquantifiable, problem.

I think most people buy the game because it gives them an experience they enjoy, and don't give a rat's ass about analyzing the math. 



And that is why they ran the playtest they ran.   The game won't rise or sink based upon the math being perfect.  Despite being the worst edition for math issues, 3e (in the form of Pathfinder) is still going strong.



I think 1e had some serious math issues too, and it is arguably the most popular version, if you look at it in terms of market share and % increase in player base from year to year.
The rats needed to be level one-quarter.
The problems with monster math become obvious when a party faces off against exact clones of themselves.  What the hell is the EL of that?
Well, what's the math here? If I want to defeat 3x my level 80% of the time, what level does 1x have to be for me to defeat it 50% of the time?

Doesn't the "Did you just solo Asmo?!" thing from the past week or two fit here?

Yes, well, I never understood why things which you should not be able to kill have hit points. That's a separate discussion.
The problem, and why no one ever agrees, is that DMs play monsters differently.  You can take two completely identical encounters, and if one DM just throws the monsters into battle like cannon fodder, while the other DM plays them as they would act given their intelligence and culture, then you can have two very different results.

One group will say, "The math is broken!  They are too easy!"
The other will say, "The math is broken!  They are way too hard!"
That's an interesting view, but I'm not sure it's accurate. When looking at the math, we are looking at hundreds of iterations, not just one table's experience.
No, daganev, you are looking at *millions* of combinations, with absolutely no way to determine which of them are most likely, each of them subject to thousands and thousands of external, non-controllable factors. And that's just to get a ballpark, "average" situational result. If you actually want to be able to accommodate the range of variables, you're exponentially increasing the complexity of the problem.

It's simply not subject to mathematical analysis. "Feel" is as close as you will ever get, and that only for a subset of your audience.
That's an interesting view, but I'm not sure it's accurate. When looking at the math, we are looking at hundreds of iterations, not just one table's experience.



Well, it is sort of accurate.  Each table plays differently.  DMs and players don't follow the same exact test scripts when running a game, so it's incredibly difficult to try to "balance" the math.  The combat encounter is never going to be in a static, sterile environment where the monsters always follow the exact same chain of actions and the PCs all do the same actions in the same order.

Combat encounters take place in environments, which themselves differ about as much as a person's imagination allows.  All these environmental factors are not insignificant.  That's why these discussions often quickly turn into worthless arguments.  Talking about balance in general is a good thing, but if you find yourself over-analyzing it to the point where you're completely forgetting that D&D does not take place in a sterile environment with no other factors, then it no longer matters, because that's not how the game is typically played.

And it's not just one table.  It's not unreasonable to say that a large percentage of DMs will treat monsters like stat blocks only to be used in a meat grinder, while a large percentage of DMs will treat their monsters with cunning, planning, and intelligence.  And then a much larger percentage of DMs who fall in the middle.

Because we always will have differing styles of play, someone will always think the "math" fails.  At some point you just go with what feels the best for the larger segment of players.
Well, the first round of design is based around "feel" - stuff like, "a d4 feels dinky, so we're starting rogues with a d6 expertise die," or, "advantage on attacks is fun, so we're making that part of barbarian rage." It's not worth revamping the skill DCs or expected damage for the whole packet every time they try something like that, though, because it might get shot down for design reasons. Plus, if you, say, raise default skill DCs in the same packet you give rogues skill dice, that might seem less like a cool new feature for rogues than a nerf to everyone else.

Plus, changing one thing at a time isolates variables in the feedback. If, say, they had doubled monster HP in this packet, they might have seen a lot of people suddenly more unhappy with fighters - not because they'd done something "wrong" with that class, but because all of a sudden the mage save-or-die effects are relatively much more powerful, and fighter players feel silly chipping away at big HP pools instead of just polymorphing bad guys. So instead they get some useful feedback on the changes to the fighter class now, and they can revisit class balance issues when they "fix the math" next packet.
No, daganev, you are looking at *millions* of combinations, with absolutely no way to determine which of them are most likely, each of them subject to thousands and thousands of external, non-controllable factors. And that's just to get a ballpark, "average" situational result. If you actually want to be able to accommodate the range of variables, you're exponentially increasing the complexity of the problem. It's simply not subject to mathematical analysis. "Feel" is as close as you will ever get, and that only for a subset of your audience.




I agree.
The best overall method for balancing the math in a game like D&D (and not even that is 100% perfect) is to keep numbers in a low, contained range, and also to avoid as much as possible "synergy" between abilities, feats, etc.

A little of that synergy is OK, but ideally every ability, feat, or character option should be a thing of its own, have its own independent use, or offer a variant use of a regular rule. Also, make abilities that aren't math-related. The opposite, abilities that add up into combos, inevitably leads to exaggerated inflation of numbers at some point (at least with some character "builds"), especially when supplement books come into play.

--> Weapon Finesse, as in 3E, was one such good ability design. It can't screw up the math. You couldn't get "more" from it than you would normally get from your Str score. It just changed what you used for the to-hit roll.
--> The Track feat is also interesting. It doesn't mess with the math at all, while offering a relevant and unmeasurable new ability for the character.
--> Power Attack, however, can severely drive number inflation. Maybe not by itself, but when you add more and more feats that raise your damage and stack with each other, this can become a problem.



2E AD&D, imo, had the "best math" of all D&Ds by far.
And that is probably because character progression wasn't based as much on "stacking bonuses" as in newer editions.
It had its issues too, yes, some overpowered spells and abilities, but fewer, and usually those issues weren't in the form of "math imbalance".

No, daganev, you are looking at *millions* of combinations, with absolutely no way to determine which of them are most likely, each of them subject to thousands and thousands of external, non-controllable factors. And that's just to get a ballpark, "average" situational result. If you actually want to be able to accommodate the range of variables, you're exponentially increasing the complexity of the problem.

It's simply not subject to mathematical analysis. "Feel" is as close as you will ever get, and that only for a subset of your audience.

This is not correct.
There might be millions of combinations, but only a small subset of those combinations are actually "interesting".

There is a maximum amount of damage that is possible for each class at each level.  There is also a minimum amount of damage possible.  And there is a statistically probable amount of damage done at each level for each class.   Those are the only three cases you need to worry about.

If giant pieces of software can be unit tested, then so can a TTRPG.


Deciding which math is "correct" is difficult.  And WotC got about 100,000 data points to figure that out.  They claim they now know what the correct math should look like.  Now they just need to iterate on it and make sure it's correct.
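Taking the "unit test a TTRPG" idea above literally, a hedged sketch might check that a class's damage output stays inside a designer-chosen band at every level. Both the damage progression and the band here are invented placeholders:

```python
def fighter_damage_stats(level: int):
    """Min, average, and max damage for a hypothetical fighter swing
    of 1d10 + level (a made-up progression, for illustration only)."""
    return 1 + level, 5.5 + level, 10 + level

for level in range(1, 11):
    low, avg, high = fighter_damage_stats(level)
    # Invented target band: average damage between level+4 and level+8.
    assert level + 4 <= avg <= level + 8, (level, avg)

print("damage averages stay inside the target band")
```

Tests like this only cover the codified numbers, which is exactly the scope daganev is claiming; whether that scope is the right one is what the replies dispute.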
 
That's interesting view, but I'm not sure it's accurate. When looking at the math we are looking at hudreds of iterations not just one tables' experience



Well, it is sort of accurate.  Each table plays differently.  DMs and players don't follow the same exact test scripts when running a game, so it's incredibly difficult to try to "balance" the math.  The combat encounter is never going to be in a static, sterile environment where the monsters always follow the exact same chain of actions and the PCs all do the same actions in the same order.

Combat encounters take place in environments, which themselves differ about as much as a person's imagination allows.  All these environmental factors are not insignificant.  That's why these discussions often quickly turn into worthless arguments.  Talking about balance in general is a good thing, but if you find yourself over-analyzing it to the point where you're completely forgetting that D&D does not take place in a sterile environment with no other factors, then it no longer matters, because that's not how the game is typically played.

And it's not just one table.  It's not unreasonable to say that a large percentage of DMs will treat monsters like stat blocks only to be used in a meat grinder, while a large percentage of DMs will treat their monsters with cunning, planning, and intelligence.  And then a much larger percentage of DMs who fall in the middle.

Because we always will have differing styles of play, someone will always think the "math" fails.  At some point you just go with what feels the best for the larger segment of players.

I'm not aware of a charOp board accepting anecdotal evidence as persuasive in deciding whether something is overpowered or not.  I'm pretty sure they look at the mathematical averages.   Perhaps I'm wrong on this, though, and you can provide some examples where the differences between tables mattered to those who focus on the math of the game?
I'm not aware of a charOp board accepting anecdotal evidence as persuasive in deciding whether something is overpowered or not.  I'm pretty sure they look at the mathematical averages.   Perhaps I'm wrong on this, though, and you can provide some examples where the differences between tables mattered to those who focus on the math of the game?



It should matter because RPGs aren't played in a vacuum where every group is running into the same mathematical problems for the same encounter.

I don't know, maybe people do play like that, but certainly not most.  I feel I've already given a pretty good example of what I'm talking about here.

If one table has a DM who just throws a bunch of orcs at the PCs as cannon fodder, then the players might think the math is broken because a like-level party was able to defeat them very easily.  If the other table has a DM who plays the orcs so that they lure the party, plan ambushes, attack in formations, and target the PCs strategically, then you can have players who think the math is broken because they had a TPK too easily.  The core math isn't any different from one table to the next.

The non-measurable factors like role-playing impact a scenario just as much as, if not more than, the flatline math itself.

The other critical error I see with the math-obsessed is that they often forget or leave out other very important factors that impact overall effectiveness.  For example, I've heard from several people that taking a +2 bonus to Dex for a rogue is always better than taking the Alertness feat.  They are only looking at a +1 modifier on one side vs. a +5 to initiative on the other.  They are completely ignoring or forgetting that other aspects of the game can totally change the dynamic of which is better.  In this case, a rogue gets advantage on any attack vs. a target creature they go before in the round.  Any time the rogue has advantage, he adds sneak attack damage.  So in this case, that +5 initiative has a much greater impact than just a flat +5 to initiative: it greatly increases your chance to hit, doubles your chance for a critical hit (since it uses advantage), and significantly adds damage to that attack.
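The "doubles your chance for a critical hit" claim above checks out numerically. With advantage you roll two d20s and keep the higher, so the chance of seeing at least one natural 20 is:

```python
# Chance of a natural 20 on a single d20 vs. on two d20s (advantage).
p_normal = 1 / 20                  # 0.05
p_advantage = 1 - (19 / 20) ** 2   # 0.0975, just under double
print(p_normal, p_advantage)
```

So advantage takes the crit chance from 5% to 9.75%, roughly doubling it as claimed.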

 
This is not correct. There might be millions of combinations, but only a small subset of those combinations are actually "interesting".

There is a maximum amount of damage that is possible for each class at each level. There is also a minimum amount of damage possible. And there is a statistically probable amount of damage done at each level for each class. Those are the only three cases you need to worry about.

If giant pieces of software can be unit tested, then so can a TTRPG.

No, daganev. You can't predetermine what people will consider "interesting." There is no amount of max/min damage numbers which will account for, say, someone rolling a 20 on a persuasion check and getting the ogre to lay down its arms. There is no mathematically quantifiable variable which is going to cover a DM's decision to grant advantage to the party's cleric on a spell check because she role-played her incantation. And your analogy about a piece of software reveals how badly you're misunderstanding the issues here. A chunk of code has a predefined purpose, measurable efficiency at achieving that purpose, and an absolutely limited field of variables and functions. A D&D encounter does not, and it is no more subject to objective mathematical analysis than a Shakespeare play. Sure, you can use your calculator to tell you how many words the play has, and measure a thousand productions to see the average runtime. That won't tell you whether the characters' actions fit their motivations, or the "most efficient" way to block a scene.

Sorry.
This is not correct.
There might be millions of combinations, but only a small subset of those combinations are actually "interesting".

There is a maximum amount of damage that is possible for each class at each level.  There is also a minimum amount of damage possible.  And there is a statistically probable amount of damage done at each level for each class.   Those are the only three cases you need to worry about.

If giant pieces of software can be unit tested, then so can a TTRPG.
 




You are not correct.

My level 3 character deals 1d8+2 damage.
So his minimum damage is 3 and his maximum is 10, right?
No.

--> Now my character notices a loose supporting beam on the tunnel. He pushes it, making the tunnel collapse on top of the Ogre.

--> On the next fight, another Ogre with the same stats attacks us. This time my character decides his action will be trying to open the locked door while the rest of the party fights with the Ogre. 

Those are only two random scenarios I picked. The number of actual possible scenarios in an RPG is infinite.



I'm not saying rules should be made at random for an RPG. Common sense, testing, and reviewing the math are important and necessary. But there is only so much you can stress-test a game that is based on the full creativity of narrative. It is nothing at all like testing software or a video game.

In a video game you have a finite number of possibilities for what a player can do at every moment. The player can only "collapse the tunnel on top of the Ogre" if the option to push the loose beam was made available to him beforehand, devised by the game's designers.


 
There is no mathematically quantifiable variable which is going to cover a DM's decision to grant advantage to the party's cleric on a spell check because she role-played her incantation.

At the very least, the willingness of any particular DM to grant Dis/Advantage for any non-codified reason will alter the importance of any codified mechanic that could provide the same, since they don't stack. I'm not certain that there's any way to account for that.
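That non-stacking interaction is at least easy to quantify in isolation. A rough Python sketch, assuming the thread's 60% baseline hit rate (`hit_chance` and `with_advantage` are my own helpers): once the DM has granted advantage ad hoc, a codified feature that would also grant advantage adds exactly nothing, because multiple sources of advantage still mean rolling only two dice.

```python
def hit_chance(target_number, bonus=0):
    """Probability that d20 + bonus meets the target; nat 1 misses, nat 20 hits."""
    hits = sum(1 for roll in range(1, 21)
               if roll == 20 or (roll != 1 and roll + bonus >= target_number))
    return hits / 20

def with_advantage(p):
    """Roll twice, keep the better: succeed unless both rolls fail."""
    return 1 - (1 - p) ** 2

baseline = hit_chance(9)            # 9+ on a d20 = 0.60
boosted = with_advantage(baseline)  # 0.84, whether the advantage came from a
                                    # feature, the DM, or both at once
```

So the 24-point swing is fixed, and any codified advantage mechanic is worth 24 points at one table and 0 points at another, depending purely on how freely that table's DM hands advantage out.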

The metagame is not the game.

This is not correct. There might be millions of combinations, but only a small subset of those combinations is actually "interesting". There is a maximum amount of damage that is possible for each class at each level. There is also a minimum amount of damage possible. There is also a statistically probable amount of damage done at each level for each class. Those are the only 3 combinations you need to worry about. If giant pieces of software can be unit tested, then so can a TTRPG.

No, daganev. You can't predetermine what people will consider "interesting." There is no amount of max/min damage numbers which will account for, say, someone rolling a 20 on a persuasion check and getting the ogre to lay down its arms. There is no mathematically quantifiable variable which is going to cover a DM's decision to grant advantage to the party's cleric on a spell check because she role-played her incantation. And your analogy about a piece of software reveals how badly you're misunderstanding the issues here. A chunk of code has a predefined purpose, measurable efficiency at achieving that purpose, and an absolutely limited field of variables and functions. A D&D encounter does not, and it is no more subject to objective mathematical analysis than a Shakespeare play. Sure, you can use your calculator to tell you how many words the play has, and measure a thousand productions to see the average runtime. That won't tell you whether the characters' actions fit their motivations, or the "most efficient" way to block a scene. Sorry.




+ a million
WotC needs to get their heads around where they want their base hit ratio.

Let's say the default hit rate should be 60% (9+ on a d20) for monsters and players of any given level.

Let's say that applies to the main attack of the attacker's class (melee, ranged, or spell, depending on the class). Secondary attacks get an attack bonus 1 or 2 points lower, and tertiary attacks 3 or 4 points lower.

Let's say the "brute" class of mobs has the default AC hit ratio. They have strong melee DPS.

Soldiers (tanks) have AC 2-3 points higher, for a 45-50% hit rate.

Ranged strikers have AC 1-2 points lower, for a 65-70% hit rate.

Casters (AoE, special disables, SoDs) have AC 3-4 points lower, for a 75-80% hit rate.


The math shouldn't be that hard.

And if we add a +1/4 bonus per level to primary attacks/saves/AC, bounded accuracy is still preserved.
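As a sanity check on those bands: each point of AC moves the target number by one, i.e. 5 percentage points on a d20. A small Python sketch (the role names come from the post above; picking one offset from within each quoted AC range is my assumption, and `hit_rate` is my own helper):

```python
def hit_rate(base_target, ac_offset):
    """d20 hit probability when the target number shifts by ac_offset.
    Nat 1 always misses and nat 20 always hits, hence the clamp."""
    target = base_target + ac_offset
    return max(1, min(19, 21 - target)) / 20

BASE_TARGET = 9  # 9+ on a d20 = the proposed 60% default

# One representative AC offset per role, from the bands quoted above.
roles = {"brute": 0, "soldier": +2, "ranged striker": -1, "caster": -3}
for role, offset in roles.items():
    print(f"{role}: {hit_rate(BASE_TARGET, offset):.0%}")
```

This reproduces one end of each quoted band: 60% for brutes, 50% for soldiers, 65% for ranged strikers, 75% for casters.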

This is not correct.
There might be millions of combinations , but only a small subset of those combinations are actually "interesting".

There is a maximum amount of damage that is possible for each class at each level.  There is also a minimum amount of damage possible.  There is also a statiscially probable amount of damage done at each level for each class.   Those are the only 3 combinations you need to worry about.

If giant pieces of software can be unit tested, then so can a TTRPG.
 




You are not correct.

My level 3 character deals 1d8+2 damage.
So his minimum damage is 3 and his maximum is 10, right?
No.

--> Now my character notices a loose supporting beam in the tunnel. He pushes it, making the tunnel collapse on top of the Ogre.

--> On the next fight, another Ogre with the same stats attacks us. This time my character decides his action will be trying to open the locked door while the rest of the party fights with the Ogre. 

Those are only 2 random scenarios I picked. The number of actual possible scenarios in an RPG is infinite.



I'm not saying rules should be made at random for an RPG. Common sense, testing, and reviewing the math are important and necessary. But there is only so much you can stress-test a game that is based on the full creativity of narrative. It is nothing at all like testing software or a video game.

In a video game you have a finite number of possibilities for what a player can do at every moment. The player can only "collapse the tunnel on top of the Ogre" if the option to push the loose beam was made available to him beforehand, devised by the game's designers.


 

If your character does 1d8+2 damage, your minimum damage is 0 and your maximum damage is 10, plus any other bonuses the game gives you.

But we don't really care what one character's min or max damage is; we care what the min, max, and average damage is for a class.

Whether or not an option is "interesting" has nothing to do with personal choice. If one option returns a math value of 1.0001 and the other returns a math value of 1.00003, neither of those equations is more interesting than the equation which returns a math value of 1.

Only a subset of options are mathematically interesting. If you drop a tunnel on an ogre's head, or get an Ogre stuck in a trap, those are not mathematically interesting options either. Those will be outside the math.

What is mathematically interesting is how often you are able to destroy a tunnel or set a trap. But those numbers have nothing to do with monster HP, level, or to-hit bonuses.
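Folding the miss chance in makes that class-level average concrete. A quick Python sketch (the 1d8+2 attack and the 60% hit rate are hypothetical numbers reused from earlier in the thread, and the function is my own):

```python
def expected_damage_per_attack(p_hit, num_dice, die_size, bonus):
    """Long-run average damage per attack: a miss deals 0, and a hit
    deals num_dice * dX + bonus, averaging (X + 1) / 2 per die."""
    average_on_hit = num_dice * (die_size + 1) / 2 + bonus
    return p_hit * average_on_hit

# 1d8+2 at a 60% hit rate: min 0 (a miss), max 10, average 0.6 * 6.5 = 3.9
dpr = expected_damage_per_attack(0.6, 1, 8, 2)
```

That 3.9 per attack is the kind of number you would tabulate per class per level before deciding how many rounds a monster's HP should buy it.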

 

Not necessarily outside the math.

I may, for example, want to invest in my character being good at Knowledge about mining, engineering, dungeoneering, etc., instead of him having "awesome blows" or "awesome spells" for combat, and fight my foes indirectly by using what's available in the scenario, like in the loose-beam example. I'm investing character "points", skills, feats or whatever for that... and boosting the math on the rolls I want to make relative to those skills (assuming these are treated as skills).