A hard question to answer, because we don't know your use case, what kind of models you're using, whether privacy is a consideration, any of that. It's one thing to do super-low-power inference for wake words or something like that on the edge, another to run on a customer's phone or PC, and another entirely to have a huge model running on a cluster.