When things are small and predictable, everything is easy. Say you have a finite number of users, like 1,000: you code something, put it into staging, and run the stress test. If it doesn't work, you try another approach and test again.
Now, when you code for an unknown number of users (1, 2, 3... 100... 1,000... a million?), supernatural stuff likes to come up. I just found out that our relay server acts on its own will, even though we coded it without any restrictions.
Say we had 1,000 concurrent users. The direct device-to-device socket stream is unreliable, so we fall back to the relay, and things are happy: the user uploads, the server relays, the file goes to the receiving party, the video shows. Happy customer. But...
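The direct-then-relay fallback above can be sketched roughly like this. This is a minimal sketch, not our actual relay code: `open_stream`, the addresses, and the timeout are all hypothetical.

```python
import socket

def open_stream(peer_addr, relay_addr, timeout=3.0):
    """Try a direct device-to-device connection first; if that socket
    is unreliable (refused or timing out), fall back to the relay.

    Hypothetical sketch: returns which route was taken plus the
    connected socket, so the caller can stream over either path.
    """
    try:
        # Attempt the direct peer-to-peer path.
        sock = socket.create_connection(peer_addr, timeout=timeout)
        return ("direct", sock)
    except OSError:
        # Direct path failed -- route the stream through the relay.
        sock = socket.create_connection(relay_addr, timeout=timeout)
        return ("relay", sock)
```

In a real system the "unreliable" check would be richer than a single connect attempt (retries, keepalives, NAT probing), but the shape is the same: prefer direct, degrade to relay.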
Now say we had 1 million concurrent users, the socket is reliable, no need to relay. For one crazy scenario we decided to put a firewall restriction between sender and receiver, so the second scenario kicks in: the relay is on. Upload the file, stream it. Failed. Hmm. Check the logs: both parties are connected and alive, the file is there, so why didn't it stream? Modify the code, bypass every security measure, still fails. Shit lah.
We tried restarting the relay, which cost another 20 minutes waiting for DNS to come back up. Tried again and it was OK. Case closed. A week later, the same issue came back.
WTF. This must be a hardware failure! But no log shows it; we just had to believe that. Hahaha. After a month of stupid debate, I finally checked our processor's spec. Everything seemed legit, except that some low-level instructions are actually simulated. Yes, I read your processor spec, sir. When things get rough, some low-level stuff gets higher priority, and "rough" can be as little as 40% usage. Weird.
Changing the processor fixed the issue. What about when our node uses a virtual processor? You just need to have some faith, son. Have some faith. Hahaha.