A lot of clinical trial issues come down to fairly fundamental statistics, but there are some specific quirks that tend to come up in trials and not elsewhere (e.g. compliance with treatment). This book covers a lot of those issues, and it's fairly short.
https://www.amazon.com/Designing-Randomised-Trials-Education-Sciences/dp/0230537359
Disclaimer: I used to work with the authors, and have published papers with them. But I bought the book, with my own money.
You can run a significance test on baseline characteristics, but under correct randomization about 1 in 20 such tests will be statistically significant at the 5% level purely by chance. So if your test is significant, you don't know whether randomization failed or you were just unlucky.
And a non-significant randomization check doesn't tell you much either: it only tells you that you failed to detect a failure of randomization, not that none occurred.
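The false-positive side of this is easy to demonstrate with a minimal simulation (function and sample sizes are illustrative, not from the text): draw both arms from the same population, so randomization is correct by construction, and test a baseline covariate repeatedly. Roughly 5% of the checks come out "significant" anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def baseline_check(n_per_arm=100):
    # Randomization is correct by construction: both arms are
    # drawn from the same distribution, so any "significant"
    # baseline difference is a false positive.
    arm_a = rng.normal(size=n_per_arm)
    arm_b = rng.normal(size=n_per_arm)
    return stats.ttest_ind(arm_a, arm_b).pvalue

# Run the check across many simulated trials.
pvals = np.array([baseline_check() for _ in range(10_000)])
frac_significant = (pvals < 0.05).mean()
print(f"Fraction of 'significant' checks: {frac_significant:.3f}")
```

The fraction lands near 0.05, which is just the test operating at its nominal error rate, not evidence about the randomization procedure.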
So don't try. Instead, design robust randomization procedures that you believe in (telephone randomization, sequentially numbered opaque sealed envelopes, whatever). If you don't have complete faith in your randomization procedure, no statistical check afterwards can restore it.
Link (behind a paywall): http://www.bmj.com/content/319/7203/185.1.full I think the clinical trials book by Torgerson and Torgerson covers this: https://www.amazon.com/Designing-Randomised-Trials-Education-Sciences/dp/0230537359 (but so should any decent book on clinical trials).