Consider the book "Understanding Variation". It's helpful when you're expected to respond to variation (perhaps with a medical intervention of some sort). The goal is to avoid the errors associated with overreaction or underreaction to variation. Sometimes the variation is ordinary randomness and no reaction is necessary; sometimes the variation is large enough that a response is appropriate. The book describes using control charts and control limits to tell the difference.
If your normal random variation includes both disease activity and machine variation, then you'll need to establish a baseline level of variation containing only the components that don't need a clinical reaction. Maybe get a measure of random variation using a patient who has no disease. Establish control limits (typically six standard deviations wide, i.e. three on either side of the center line), then measure the patient who has the disease. You'd then react to individual readings that fall outside the control limits.
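For concreteness, here's a rough Python sketch of that workflow using an XmR (individuals and moving range) chart, which is Wheeler's go-to chart for individual readings. The baseline numbers are invented for illustration, and the 2.66 scaling factor is the standard XmR constant (3/d2, with d2 = 1.128 for moving ranges of size 2):

```python
# A minimal sketch, assuming scalar readings; data here is made up.

def xmr_limits(baseline):
    """Center line and natural process limits from baseline data (XmR chart)."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Hypothetical baseline: readings from a patient with no disease, so the
# variation is machine noise plus ordinary biological noise only.
baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 3.7, 4.3, 4.0]
center, lcl, ucl = xmr_limits(baseline)

# Now screen readings from the patient with disease: only points outside
# the limits signal variation worth a clinical reaction.
patient = [4.2, 4.5, 3.9, 6.8, 4.1]
for i, x in enumerate(patient):
    flag = "REACT" if (x < lcl or x > ucl) else "ok"
    print(f"reading {i}: {x:.1f}  [{flag}]")
```

With the numbers above only the 6.8 reading trips the limits; the rest is noise you'd leave alone.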
Not to be evasive or frustrating, but I hope you know that there is a ton of interesting science being done by politically conservative people. The only thing that disappoints me is your analysis.
If you are actually interested in understanding your data, I seriously recommend you read Understanding Variation by Donald Wheeler. Crazy title, and the cover looks like nonsense even to super-lefty me, but it's a great book that will really help you with statistical analysis. This book grounded me in the data and cut my idealism off at the knees. I'd be interested to see what it does for you.
The purpose of control charts is to aid the engineer in establishing a state of statistical control. When there are indications that the process is not in control, then the engineer will identify and eliminate the special causes of variation. After a few iterations of continuous improvement and charting, and maintaining the elimination of special causes, the process is presumably in a state of statistical control. This means that the remaining variation is that of a constant system of chance causes.
The control chart tells the story of the state of control because the engineer chooses a rational strategy for subgrouping the data. Variation between subgroups is compared to variation within subgroups. When variation within subgroups is out of control, certain corrective actions are typical. When variation between subgroups is out of control, other types of corrective actions are appropriate.
A rational subgrouping organizes the data so the minimum natural amount of variation is within subgroups. In a state of statistical control, there are two things we can say about the variation: all the subgroups have the same variation, and variation between subgroups is governed by the same system of causes as that within subgroups.
Control charts are to be applied so that out-of-control conditions are identified immediately, while the special causes of variation are still easy to find.
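To make the within-vs-between comparison concrete, here's a small Python sketch of a classic Xbar-R chart. The subgroup data is invented, and A2 = 0.577, D3 = 0, D4 = 2.114 are the standard constants for subgroups of size 5:

```python
# A sketch of within-vs-between variation on an Xbar-R chart; data is made up.

A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for subgroup size n = 5

# Hypothetical rational subgroups: five consecutive readings each, chosen so
# only common-cause variation should fall within a group.
subgroups = [
    [10.1, 9.8, 10.0, 10.2, 9.9],
    [10.0, 10.3, 9.7, 10.1, 10.0],
    [10.4, 10.2, 10.5, 10.3, 10.6],  # a shift shows up *between* groups
    [9.9, 10.0, 10.1, 9.8, 10.2],
]

means = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
grand_mean = sum(means) / len(means)
r_bar = sum(ranges) / len(ranges)

# The range chart checks variation *within* subgroups...
r_ucl, r_lcl = D4 * r_bar, D3 * r_bar
# ...and the Xbar chart checks variation *between* subgroups, with limits
# derived from the within-subgroup ranges.
x_ucl, x_lcl = grand_mean + A2 * r_bar, grand_mean - A2 * r_bar

for i, (m, r) in enumerate(zip(means, ranges)):
    r_flag = "within-group signal" if not (r_lcl <= r <= r_ucl) else "ok"
    x_flag = "between-group signal" if not (x_lcl <= m <= x_ucl) else "ok"
    print(f"subgroup {i}: mean={m:.2f} ({x_flag}), range={r:.2f} ({r_flag})")
```

The design point is that the Xbar limits come from the within-subgroup ranges, which is exactly why a shift between subgroups stands out against them.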
I can't tell from your description what the subgrouping strategy is, nor why it would be considered rational. Tuning the sampling and subgrouping to maximize the ability to detect spikes isn't what's meant by rational subgrouping. Rational means aligning the subgroups so there is only common-cause variation within groups. If that's the subgrouping, then the variation within groups is the least possible given the current process. With that kind of subgrouping, the out-of-control conditions will become apparent. The subgrouping strategy should include data that is up to the minute, so there's no reason to ignore the most recent information.
Is the data subgrouped in a way that minimizes variation within groups? If control charts go back a whole year, then presumably you think the variation within subgroups has been constant for a year. Does the range chart show that? It would be fantastic for your process if that's the case.
If the range chart is in control, then all the causes of spikes, shifts, and trends are happening between subgroups. But if, after each out-of-control condition is detected, you go through an effort to eliminate the cause, then the process has changed and it's time to calculate new control limits.
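A minimal sketch of that recalculation in Python, assuming you know when the special cause was eliminated (the readings and split point here are made up, and the limits use the XmR formula, mean ± 2.66 × average moving range):

```python
# Recompute limits after a known process change; all numbers are invented.

def natural_limits(values):
    """Natural process limits for an XmR chart from a run of readings."""
    mean = sum(values) / len(values)
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(mrs) / len(mrs)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

readings = [12.0, 11.8, 12.3, 11.9, 12.1,   # before the fix
            10.2, 10.0, 10.3, 9.9, 10.1]    # after the cause was eliminated
change_point = 5  # index where the special cause was removed

# The old limits describe a process that no longer exists; compute fresh
# limits from post-change data only, once enough points have accumulated.
old_limits = natural_limits(readings[:change_point])
new_limits = natural_limits(readings[change_point:])
print("old limits:", old_limits)
print("new limits:", new_limits)
```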
Here are the books you seek:
"Understanding Variation" by Wheeler is a great starting point.
The details are in "Understanding Statistical Process Control".
The principles were developed in the 1930s in manufacturing, at the same time that the principles of hypothesis testing were taking shape. That's why the early authors needed ways to understand unusual sources of variation without assuming any kind of distribution for the variation. This remains a useful idea.
Wheeler took the principles and applied them to things beyond manufacturing.
Try "Understanding Variation" by Wheeler. Then "Understanding Statistical Process Control".
Perhaps there are learning guides or videos on the topic, but Donald Wheeler is a critical source, so any learning references should be traceable to him. Of course he links back to Deming and Shewhart.
It's good that you've already realized how easy it is to lie with statistics!
Donald Wheeler - Understanding Variation https://www.amazon.com/Understanding-Variation-Key-Managing-Chaos/dp/0945320531
(It's short and extremely valuable. Grasping variation is key, and many stats people, I fear, still don't get it.)
For stats, check out Modern Dive (free online): https://moderndive.com/
It's great that you want to understand stats. Since you mention psychology, though, I want to say that I think the field often doesn't have enough empathy. Behind any numbers you see in psychology are real people, with real lives and real pain and problems. You need to keep a human, empathetic point of view!
As everybody else here says in different ways, do these three steps:

1. Find out the business problems people are trying to solve by going to where they work and sitting down with them. LITERALLY where they work. The gemba, as it's called.
2. This will tell you the decisions they need to make.
3. This will tell you the data they need to make those decisions.

A handy heuristic: once you've started providing data, ask the recipients what decisions they COULDN'T make if they stopped getting it. If they can't think of any, you're providing the wrong data.
Have a read of what people have found before you. They've written it up so you can learn it quicker than they did. I'd define myself as a systems thinker, so what I'd recommend is skewed towards that way of thinking.

This guy is good and has a book coming out soon specifically about how data can provide value when analysed properly: https://www.leanblog.org/tag/process-behavior-charts/ He writes well about something most analysts have never heard of; it's worth an hour of your time and is invaluable. Trust me, browse it. There's a HUGE amount of brilliant stuff online.

One of the best books I've ever read on this is Understanding Variation: https://www.amazon.co.uk/Understanding-Variation-Key-Managing-Chaos/dp/0945320531 It's short, oriented to problem solving and process understanding, and eminently practical.
Your job sounds brilliant by the way, but don't get bogged down in learning software packages. People who receive your output need to know what it MEANS. This can often be left out of fancy pretty graphs. Analysis should produce insight, not just the workings out. Get to know the business and be a business person, not just a data person. Data has no meaning stripped of context, and you should steep yourself in context.