Transaction management, in a nutshell, is keeping track of (serializing or scheduling) the changes made to a database. An overly simplistic example is debiting $100 and crediting $110. If the account balance is currently $90, the order of these transactions is vital to avoiding overdraft fees. Concurrency control is what ensures data integrity when transactions occur, which makes the two concepts interconnected. In our example, serializing the transactions (performing all actions consecutively, in the right order) is key: you want to add the $110 first so you have $200 in the account before debiting the $100. Doing this requires timestamp ordering/serialization. This became a serious issue back in 2010 and was still one in 2014, when a survey of 44 major banks found that nearly half still reorder transactions, which can drain account balances and cause overdraft fees (Kristof, 2014). Banks typically get around this by giving deposits longer processing times than charges. Thus, even when transactions are handled correctly in serial order, per-transaction processing times can vary so significantly that these problems still occur. According to Kristof (2014), banks say they do this to process payments in order of priority.
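To make the ordering point concrete, here is a minimal Python sketch; the $35 overdraft fee and the account figures are assumptions for illustration only, not any particular bank's rules:

```python
OVERDRAFT_FEE = 35  # assumed fee, for illustration only

def apply_in_order(balance, transactions):
    """Apply signed transaction amounts strictly in the order given."""
    for amount in transactions:
        balance += amount
        if balance < 0:
            balance -= OVERDRAFT_FEE  # fee charged each time the balance dips below zero
    return balance

start = 90
# Timestamp order: the $110 credit arrives before the $100 debit.
print(apply_in_order(start, [+110, -100]))  # 90 + 110 - 100 = 100, no fee
# Reordered: the debit is processed first and the account briefly goes negative.
print(apply_in_order(start, [-100, +110]))  # 90 - 100 = -10, fee applied, then +110 = 65
```

Processing the debit first drops the balance to -$10, triggers the assumed fee, and leaves $65 instead of $100, even though the same two transactions were applied in both cases.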
The case above illustrates why an optimistic concurrency control method is not helpful here: optimistic methods do not check for serializability when transactions first execute, which can carry a high resource cost. Instead, transactions run locally and are validated against a serializable order only before being finalized. If we started the month paying a batch of bills, realized we were close to $0, deposited $110, and then continued paying bills totaling $100, the repeated validation and retry cycles could eat up a lot of processing time; it gets complicated quickly. Conservative concurrency control, by contrast, produces the fewest aborts and eliminates wasted processing by executing transactions serially, but it gives up the ability to run transactions in parallel.
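As a rough sketch of that optimistic read/compute/validate cycle (the Account class and its version counter here are hypothetical, not a real database API), each transaction computes its result locally and commits only if no other transaction finished first; every failed validation forces a retry, which is where the wasted processing time comes from:

```python
import threading

class Account:
    """Toy account with optimistic (validate-before-commit) updates."""
    def __init__(self, balance):
        self.balance = balance
        self.version = 0               # bumped on every successful commit
        self._lock = threading.Lock()  # held only during the brief commit step

    def optimistic_update(self, delta):
        while True:
            # 1. Read phase: snapshot the current state without locking.
            snapshot_version = self.version
            new_balance = self.balance + delta  # 2. Compute locally.
            # 3. Validation/commit phase: commit only if nothing changed meanwhile.
            with self._lock:
                if self.version == snapshot_version:
                    self.balance = new_balance
                    self.version += 1
                    return new_balance
            # Validation failed: another transaction committed first, so retry.

acct = Account(90)
acct.optimistic_update(+110)  # deposit
acct.optimistic_update(-100)  # bill payment
print(acct.balance)           # 100
```

Under low contention the lock is barely held and updates fly through; under heavy contention, the retries pile up, which is exactly the cost described above.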
Huge volumes of incoming data, like those from the Internet of Things (where databases need to be flexible and extensible because a projected trillion different items will be producing data), would benefit greatly from optimistic concurrency control. Take the example of a Fitbit, Apple Watch, or Microsoft Band, which records data on you throughout the day. Because this massive data stream is time-stamped and heterogeneous, it does not matter whether the sleep data and the walking data are processed in parallel; in the end, everything is still validated. This allows for faster upload times over Bluetooth and/or Wi-Fi. Data can be actively extracted and explained in real time, but when a device carries many sensors, the data and sensors all have different reasoning rules and semantic links between data sources, where existing or deductive links between sources exist (Sun & Jara, 2014), and that is where the true meaning of the generated data lies. Sun and Jara suggest that a solid mathematical basis will help ensure a correct and efficient data storage system and model.
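As a minimal sketch of that idea, assuming two hypothetical sensor streams on one device, independent streams can be processed in parallel and the whole batch validated once before it is accepted:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical time-stamped readings from two independent sensors.
streams = {
    "sleep":   [(0, 7.5), (1, 6.8), (2, 7.1)],    # (timestamp, hours slept)
    "walking": [(0, 4200), (1, 5100), (2, 3900)], # (timestamp, steps)
}

def process(item):
    """Summarize one sensor stream; streams do not depend on each other."""
    name, readings = item
    ordered = sorted(readings)  # per-stream timestamp order
    total = sum(value for _, value in ordered)
    return name, ordered, total

# Independent streams can be processed in parallel (optimistically)...
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process, streams.items()))

# ...then validated together before the upload batch is committed.
for name, ordered, total in results:
    stamps = [ts for ts, _ in ordered]
    assert stamps == sorted(stamps), f"{name}: out-of-order timestamps"
    print(name, "total:", total)
```

The ordering constraint here is per stream, not global, which is why the parallelism is safe in a way that the banking example is not.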
Resources
- Kristof, K. (2014). Nearly half of banks still "reorder" checks, boosting overdraft fees. CBS News. Retrieved from http://www.cbsnews.com/news/nearly-half-of-banks-still-reorder-checks-boosting-overdraft-fees/
- Sun, Y., & Jara, A. J. (2014). An extensible and active semantic model of information organizing for the Internet of Things. Personal and Ubiquitous Computing, 18(8), 1821-1833. Retrieved from ACM Digital Library.