Our company has many customers whose hardware we support. I'd like to create a series of anomaly detection programs (ADs), e.g. using TensorFlow, to monitor their accounts for unusual disk activity, spikes in latency, etc.
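To give a rough sense of the scale of each program, here is a minimal sketch of the kind of AD I have in mind: a tiny autoencoder that learns "normal" metric windows and flags anything it reconstructs poorly. The feature names, window size, and threshold are placeholders for illustration, not our real monitoring schema, and a model this small saves to well under a megabyte on disk.

```python
import numpy as np
import tensorflow as tf

N_FEATURES = 4   # e.g. disk reads/s, disk writes/s, p95 latency, queue depth (placeholders)
WINDOW = 32      # consecutive samples per training example

def build_autoencoder():
    # Small dense autoencoder: compress a metric window to 16 units, then reconstruct it.
    inputs = tf.keras.Input(shape=(WINDOW, N_FEATURES))
    x = tf.keras.layers.Flatten()(inputs)
    x = tf.keras.layers.Dense(16, activation="relu")(x)
    x = tf.keras.layers.Dense(WINDOW * N_FEATURES)(x)
    outputs = tf.keras.layers.Reshape((WINDOW, N_FEATURES))(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on windows of "normal" metrics for one customer account (stand-in random data here).
normal_windows = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
model = build_autoencoder()
model.fit(normal_windows, normal_windows, epochs=5, batch_size=64, verbose=0)

# Score new windows: a high reconstruction error marks the window as anomalous.
new_windows = np.random.rand(10, WINDOW, N_FEATURES).astype("float32")
errors = np.mean((model.predict(new_windows, verbose=0) - new_windows) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()   # naive threshold, purely for illustration
print("anomalous windows:", np.where(errors > threshold)[0])
```

So each AD is basically a small script plus a small saved model; it's the "hundreds of them, running continuously" part that I can't yet price.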
I'm hoping to use GCP to help solve the problem and host the programs that we design, but first I've been asked to give an estimate of how much it will cost. Management is worried that creating and deploying hundreds of these ADs will take lots of disk space and involve significant ongoing cost.
I am but a humble data scientist, and relatively new to GCP, so I don't feel I'm in a position to give a definitive answer, even though my intuition tells me this is exactly the kind of workload GCP was designed for and that it should handle it without breaking the bank.
Can anyone more knowledgeable in this area either back me up or warn me about the logistics of deploying many, many small programs via GCP?