In reference to computers, a macro (short for "macroinstruction") is a programmable pattern that translates a certain sequence of input into a preset sequence of output. Macros can make repetitive tasks easier by standing in for a complicated sequence of keystrokes, mouse movements, commands, or other input.
In computer programming, macros are a tool that allows a developer to reuse code. For instance, in the C programming language, here is a simple macro definition that takes an argument:
#define square(x) ((x) * (x))
Once defined this way, the macro can be used in the code body to find the square of a number. When the code is preprocessed before compilation, the macro is expanded each time it occurs. For instance, using the macro like this:
int num = square(5);
is the same as writing:
int num = ((5) * (5));
Note that a macro is not the same as a function: a function call requires extra instructions and computational overhead to pass arguments and return a value, while a macro is simply expanded in place wherever it appears. In simple cases, using a macro instead of a function can therefore improve performance by requiring fewer instructions to execute. The trade-off is that a macro's arguments are substituted as text, so an argument with side effects (such as x++) may be evaluated more than once, which a function call avoids.