The probability in my mind that I am correct in attributing extensive moral
It's a very interesting question, and it bears on what my colleague Dan Moller calls moral risk. It's a problem not just for utilitarians. The general problem is this: I might have apparently good arguments for thinking it's okay to act in a certain way. But there may be arguments to the contrary, arguments that, if correct, show that I'd be doing something very wrong if I acted as my own arguments suggest. Furthermore, the moral territory here may be genuinely complex. Putting all that together, I have a reason to pause: if I simply follow my arguments, I'm taking a moral risk.
Now, there may be costs to taking these risks seriously. The costs might be non-moral (say, monetary) or, depending on the case, there may be moral costs as well. There's no easy answer. Moller explores the issue at some length, using the case of abortion to focus the arguments. You might want to have a look at his paper HERE.
A final note: when we get to bacteria, I think the moral risks are low enough to be discounted. I can't even imagine what it would mean for bacteria to have the moral status of people or even of earthworms.