Thanks for the question. There are various aspects to scalability, which I'll cover in turn:
- Size of network. There is no scalability limit in terms of node count: the peer-to-peer network does not need to be fully connected, because each node relays transactions and blocks to its peers rather than connecting directly to every other node.
- Transaction throughput. This is limited mainly by the CPU of the slowest server running a node. On mid-range servers the latest build achieves around 1000 transactions/second. Throughput drops if a wallet holds a large number of unspent transaction outputs, but there are APIs and runtime parameters to combine those automatically and keep the count down.
- Disk usage. The blockchain keeps growing on disk as more transactions are performed. If your transactions just make simple asset transfers or publish small pieces of data, budget around 300 bytes of disk space per transaction plus 2 KB per block. Each node also uses some extra space of its own, to track the addresses it is interested in and to index subscribed assets and streams. Any large piece of data, however, is only stored once on disk.
- Memory usage. This grows mainly with the number of unspent transaction outputs in the wallet, and around 300 bytes are also held in memory for each block in the chain. If the node has millions of addresses in its wallet (including read-only addresses), or is subscribed to millions of assets or streams, that also increases memory usage.
- Node catch-up time. A new node joining the chain has to replay all transactions from the beginning, so it can take significant time to become up-to-date. The exact time depends on how many blocks and transactions are in the chain.
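On the throughput point, combining unspent outputs is just an API call against the node. As a rough sketch of what that looks like over JSON-RPC, here is a minimal stdlib-only client; note that the method name "combineunspent", its (empty) parameter list, and the endpoint details are assumptions for illustration, so check your node's own API documentation for the real call and its arguments:

```python
# Sketch: asking a node to consolidate its wallet's unspent outputs
# over JSON-RPC. Method name and parameters are assumptions -- consult
# your node's API docs for the actual call.
import json
import urllib.request

def rpc_request(method: str, params: list) -> bytes:
    """Build a JSON-RPC request body for the node."""
    return json.dumps({"id": 1, "method": method, "params": params}).encode()

def combine_unspent(url: str, auth_header: str) -> object:
    # Hypothetical call: combine this wallet's unspent outputs.
    req = urllib.request.Request(
        url,
        data=rpc_request("combineunspent", []),
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]
```

Running this periodically (or enabling the equivalent runtime parameters, if your build has them) keeps the unspent output count down and throughput up.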
As for operational cost, you will need to determine that for yourself, based on your staff and server requirements (which we hope the points above help to inform).
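The disk and memory figures above can be turned into a quick capacity estimate. The per-transaction, per-block, and in-memory byte counts below come straight from the figures in this answer; the transaction rate and block interval in the example are made-up inputs you should replace with your own:

```python
# Back-of-envelope capacity estimate using the figures above:
# ~300 bytes of disk per simple transaction, ~2 KB of disk per block,
# and ~300 bytes of memory per block. Ignores per-node address
# tracking and subscription indexes, which add to both totals.

TX_DISK_BYTES = 300      # disk per simple transaction (figure above)
BLOCK_DISK_BYTES = 2048  # disk per block (figure above)
BLOCK_MEM_BYTES = 300    # memory per block (figure above)

def disk_estimate(num_txs: int, num_blocks: int) -> int:
    """Rough on-disk size of the chain, in bytes."""
    return num_txs * TX_DISK_BYTES + num_blocks * BLOCK_DISK_BYTES

def block_memory_estimate(num_blocks: int) -> int:
    """Rough memory held for block headers, in bytes."""
    return num_blocks * BLOCK_MEM_BYTES

# Example inputs (assumptions): one year at 10 tx/second, 15-second blocks.
seconds = 365 * 24 * 3600
txs = 10 * seconds
blocks = seconds // 15
print(f"disk:   {disk_estimate(txs, blocks) / 2**30:.1f} GiB")
print(f"memory: {block_memory_estimate(blocks) / 2**20:.1f} MiB for blocks")
```

This only covers the chain itself; wallet unspent outputs, tracked addresses, and subscriptions add to the totals as described above.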