1. Normalisation will usually get you a set of efficient tables. Ignore the hoary old "Joins are evil" MySQL tradition. Joins are terrifically efficient provided the Join columns are properly indexed.
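As a minimal sketch of the point above (table and column names are hypothetical): with an index on the join column, MySQL can resolve each join lookup via the index rather than scanning the joined table.

```sql
-- Hypothetical schema: customers joined to orders on an indexed column.
CREATE TABLE customers (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

CREATE TABLE orders (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id INT UNSIGNED NOT NULL,
  placed_at   DATETIME NOT NULL,
  INDEX idx_orders_customer (customer_id),   -- the join column is indexed
  FOREIGN KEY (customer_id) REFERENCES customers (id)
);

-- The join is resolved with index lookups, not table scans:
SELECT c.name, o.placed_at
FROM customers AS c
JOIN orders    AS o ON o.customer_id = c.id;
```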
2. Query performance on tables with millions or billions of rows may be much faster when BLOB and TEXT columns are moved to parallel tables. Separately, low-cardinality indexes, e.g. on gender, province or state, are near-useless: they filter out too few rows for the optimiser to bother with them.
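One way to sketch the parallel-table split (names here are illustrative, not from the source): keep the narrow, frequently scanned columns in the main table and park the wide TEXT column in a 1:1 companion table that is joined in only when actually needed.

```sql
-- Hypothetical split: narrow "hot" columns stay in articles;
-- the wide body lives in a 1:1 parallel table.
CREATE TABLE articles (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title     VARCHAR(200) NOT NULL,
  published DATE NOT NULL
);

CREATE TABLE article_bodies (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,   -- 1:1 with articles.id
  body       MEDIUMTEXT NOT NULL,
  FOREIGN KEY (article_id) REFERENCES articles (id)
);

-- Listing queries touch only the narrow table:
SELECT id, title FROM articles WHERE published >= '2019-01-01';

-- The body is joined in only for the detail view:
SELECT a.title, b.body
FROM articles AS a
JOIN article_bodies AS b ON b.article_id = a.id
WHERE a.id = 42;
```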
3. In the design phase you develop all the queries that can be anticipated, but only after users have worked with the app for a while will you know which columns are searched on most often. Then you'll probably have to add compound indexes. Such indexes can slow down INSERT and UPDATE performance, so you may face a three-way choice: accept slower INSERTs and UPDATEs, accept slow query performance, or add a maintained, denormalised summary table to the design.
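Both remedies above can be sketched in a few lines (the `sales` table and its columns are assumed for illustration):

```sql
-- After observing that users filter sales by (region, sale_date),
-- add a compound index covering that pattern:
ALTER TABLE sales ADD INDEX idx_region_date (region, sale_date);

-- Or, if even the indexed query is too slow, maintain a denormalised
-- summary table, refreshed by the application or a scheduled job:
CREATE TABLE sales_daily_summary (
  region    VARCHAR(50)   NOT NULL,
  sale_date DATE          NOT NULL,
  total     DECIMAL(12,2) NOT NULL,
  PRIMARY KEY (region, sale_date)
);
```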
4. A major limitation you may have to work around is that MySQL can usually use just one index per table per query or subquery. Thus, if the primary key is A but the join column is B, you'll need a compound index on A and B. It's usually best to reference WHERE columns first, then join and SELECT-list columns. Index column order is significant: an index on (a, b) helps only with searches on a alone or on a and b together, not on b alone or on b before a. As far as possible, design the index to facilitate queries on value ranges rather than just spot seeks.
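The leftmost-prefix rule above can be demonstrated with a throwaway table (names are hypothetical):

```sql
-- Index on (a, b): usable for leftmost-prefix searches only.
CREATE TABLE t (
  a       INT NOT NULL,
  b       INT NOT NULL,
  payload VARCHAR(50),
  INDEX idx_ab (a, b)
);

SELECT * FROM t WHERE a = 1;            -- can use idx_ab (leftmost prefix)
SELECT * FROM t WHERE a = 1 AND b = 2;  -- can use idx_ab fully
SELECT * FROM t WHERE b = 2;            -- cannot use idx_ab
SELECT * FROM t WHERE a > 1 AND b = 2;  -- range on a; the b part cannot
                                        -- be used for seeking past it
```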
5. For subqueries that may read NULL values, EXPLAIN EXTENDED followed by SHOW WARNINGS shows the impact of the optimiser's NULL checks.
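A quick sketch of that diagnostic (table and column names assumed; note that EXPLAIN EXTENDED applies to MySQL 5.6 and earlier, since from 5.7 plain EXPLAIN includes the extended output):

```sql
-- SHOW WARNINGS after EXPLAIN prints the rewritten query the optimiser
-- actually runs, making any injected NULL-handling visible.
EXPLAIN EXTENDED
SELECT * FROM t1 WHERE col1 IN (SELECT col2 FROM t2);

SHOW WARNINGS;  -- inspect the rewritten query for added NULL checks
```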
6. Tune my.cnf/my.ini settings carefully for optimal use of memory, caches and optimiser switches.
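For illustration only, a my.cnf fragment of the kind meant above; the values are placeholders, not recommendations, and must be sized to the server's RAM and workload:

```ini
# Illustrative settings; tune to your hardware and workload.
[mysqld]
innodb_buffer_pool_size = 4G      # the single most important InnoDB memory setting
innodb_log_file_size    = 512M
tmp_table_size          = 64M
max_heap_table_size     = 64M
optimizer_switch        = "index_merge=on,mrr=on"
```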
Last updated 16 Aug 2019