I'm going to spitball here for a bit.

If you model a game of Magic as a complex multivariable, time-variant, probabilistic system, you could hypothetically simulate it via controls design.

Your "plant" in this system would be the design of the deck you are using. The input would be the various card choices that you make during the course of the game over a period of N turns, and the overall desired output would be some metric evaluating the successful completion of a game otherwise known as "winning the game". The metric could be the amount of potential damage per turn (or some other metric based on the style of deck being played: control, combo, aggro, etc.), with the desired output response having a certain percent overshoot of X% (here percent overshoot would be amount of overkill damage you could deal over a certain period of turns in order to make the resistance from your opponent negligible) and the settling time would be considered the number of turns it would take to cause your opponent's life total to go to zero. Steady-state error could be considered how close the deck's output is to the desired output. Obviously there is more complexity here, but you get the gist of it. The controls portion could be considered your sideboard options for the deck, or just changes to the deck that would take into account the resistance from your opponent, and the probabilistic error of the plant and improve the output to closely match the desired output that you want to achieve.

Most people would see the output as the number of turns it takes to kill the opposing player assuming no resistance or disturbance from that opponent, otherwise known as "goldfishing." But in Standard play the set of "disturbances" can be narrowed down to a select number of cards given the current format and meta (really this could work for any format; it would just be harder in larger formats with more card choices). You could theoretically build a mathematical representation of the entire system that incorporates these disturbances, accounts for error (the probability that you don't draw into your core strategy, or into answers to opposing threats), and uses frequency data (how long, on average, your deck takes to win under varying circumstances). Through that simulation you could find an optimal deck for the given circumstances.
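The goldfishing baseline is the easiest piece to simulate. Below is a toy Monte Carlo sketch under heavy assumptions I'm making up for illustration: the deck is just a list of per-card damage values, you draw and "cast" one card per turn, and there is no mana, sequencing, or interaction:

```python
import random

# Hedged sketch: Monte Carlo "goldfishing" for a toy deck model.
# One draw and one free cast per turn; no mana costs, no opponent.

def goldfish_turns(deck, opponent_life=20, rng=None):
    """Shuffle the deck, play one card per turn, and return the turn on
    which cumulative damage reaches the opponent's life total."""
    rng = rng or random.Random()
    order = deck[:]
    rng.shuffle(order)
    total = 0
    for turn, dmg in enumerate(order, start=1):
        total += dmg
        if total >= opponent_life:
            return turn
    return None  # ran out of cards before lethal

def average_kill_turn(deck, trials=10_000, seed=0):
    """Average goldfish kill turn over many shuffled games."""
    rng = random.Random(seed)
    results = (goldfish_turns(deck, rng=rng) for _ in range(trials))
    finished = [t for t in results if t is not None]
    return sum(finished) / len(finished)

# Toy 40-card deck: 24 "spells" dealing 2 damage each, 16 blank "lands"
toy_deck = [2] * 24 + [0] * 16
print(round(average_kill_turn(toy_deck), 2))
```

Swapping in different damage distributions lets you compare deck shapes; layering in draw-probability error terms and modeled disturbances would be the (much harder) next step toward the full system described above.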

Some of the challenges with this are:

-Developing the system schematic that closely captures reality in an appropriate timeframe (as metas change dramatically from week to week and from set release to set release)

-Taking into account all variables and states of the system correctly

-Understanding that changing your design from one iteration to the next might solve one problem while invariably creating new ones the deck didn't have before, whether the change is to the deck composition itself or to its sideboard

I think applying controls theory and systems design concepts to MTG deck design would be an interesting application to pursue, but I'm not sure it would be entirely practical. A typical Magic player probably already does this to some extent through experimentation and iterative design changes, and this approach might get you similar results with more effort and time (depending on how well you understand the application).
