Memory usage: Without SSA form, an analysis must store information for every variable at every program point. With SSA form, you can ignore program points and get a similar level of precision.
You can write compilers without it, but for most optimizations you'll need to convert to SSA. What makes SSA so special for compiler construction? It makes reasoning about immutability and loop invariants a ton easier, along with constant propagation: https://en.wikipedia.org/wiki/Static_single_assignment_form More generally, SSA means implicit use/def information.
Use/def information and traversal are critical components of many static analyses and transformations: having the IR in SSA form greatly simplifies reasoning about and implementing these analyses and transformations.
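To make the "implicit use/def" point concrete, here is a minimal sketch of sparse constant propagation over a toy SSA IR. The instruction format and names are hypothetical, not from any real compiler; the point is that because every name has exactly one definition, a single forward pass can fold constants without tracking program points.

```python
# Toy SSA IR (hypothetical format): each tuple defines one fresh name.
instrs = [
    ("x1", "const", 4),          # x1 = 4
    ("y1", "const", 5),          # y1 = 5
    ("z1", "add", "x1", "y1"),   # z1 = x1 + y1
    ("w1", "mul", "z1", "x1"),   # w1 = z1 * x1
]

def const_prop(instrs):
    """Fold constants in one forward pass; single assignment makes this safe,
    since a name's value can never be overwritten later."""
    value = {}  # SSA name -> known constant (one fact per definition)
    for dest, op, *args in instrs:
        if op == "const":
            value[dest] = args[0]
        elif all(a in value for a in args):
            if op == "add":
                value[dest] = value[args[0]] + value[args[1]]
            elif op == "mul":
                value[dest] = value[args[0]] * value[args[1]]
    return value

print(const_prop(instrs))  # every name folds to a constant
```

In a non-SSA IR the same pass would have to worry about a later assignment to `x` invalidating the fact it recorded earlier; here the dictionary entry is final the moment it is written.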
In SSA form, you only need one value, not a list of values, for each variable.
You can also refer to single-assignment variables by number rather than by name, and so use simple arrays rather than hashtables to store the results.
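As a rough sketch of that array-indexing trick, assuming a (hypothetical) scheme where SSA values are numbered 0..N-1 during construction, analysis state becomes plain lists indexed by value id, with no hashing involved:

```python
# Assumed scheme: SSA values get dense ids 0..N-1 at construction time.
N = 4                               # hypothetical number of SSA values
lattice = ["top"] * N               # one dataflow fact per SSA value
use_lists = [[] for _ in range(N)]  # uses of each value, also array-indexed

def add_use(def_id, user_id):
    # Recording a use is a direct list append at the defining value's slot.
    use_lists[def_id].append(user_id)

add_use(0, 2)   # value 2 uses value 0
add_use(1, 2)   # value 2 uses value 1
lattice[0] = 4  # record a constant fact for value 0 by direct indexing
print(use_lists[0], lattice[0])
```

With mutable named variables you would need a map from (name, program point) to facts; dense integer ids turn every lookup into an array index.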
I'm mostly wondering whether there are any non-obvious "gotchas" that aren't apparent to me as a non-expert.
It was a well-regarded folklore algorithm, although that paper is the first publication with a detailed analysis.
SSA form allows a sparse analysis, where an analysis stores information only for each assignment in the program, rather than for every variable at every program point.
With a unique version per assignment, storing the results of an analysis can take considerably less memory than bit-vector or set-based approaches.
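A back-of-envelope comparison may help here. The sizes below are made up for illustration, not taken from any measurement: a dense bit-vector analysis keeps one bit per variable per program point, while a sparse SSA analysis keeps one fact per definition.

```python
# Hypothetical program sizes, purely illustrative:
num_vars = 1000    # distinct source variables
num_points = 5000  # program points
num_defs = 1500    # SSA definitions after renaming

dense_bits = num_vars * num_points  # one bit per variable per point
sparse_facts = num_defs             # one lattice value per definition

print(dense_bits, sparse_facts)  # 5000000 vs 1500
```

Even allowing for a lattice value being larger than one bit, the sparse side scales with the number of definitions rather than with the product of variables and program points.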