I've been kind of sick lately. Not a cold or anything serious, but my tooth got an infection. The problem was that it's under a dental bridge, so there was no relief. My face got swollen and it hurt like hell. I was instantly relieved when it burst, but I had to have minor surgery to save the bridge. A lot of stitches. Swollen and hurting like hell again, plus a minor fever.
What's the point of this? During this period I took several days off and worked from home. It was (and still is) a pretty busy period, and deadlines wait for no one. An interesting re-discovered fact: working from home can be an order of magnitude more productive than working at the office. There are no distractions. Simply being able to focus, get into the zone and stay there for long periods of time works wonders.
That is, until there comes a time when I need to communicate with the outside world. There are tasks which can be accomplished by putting yourself inside a virtual bubble and simply ignoring external I/O until you realize that a significant amount of time has passed because your stomach is growling for lunch/dinner. But most tasks just demand some level of interaction with co-workers in order to progress.
And this is where working from home gets a little more difficult. This may sound strange in the era of e-mail, web cameras and teleconferencing, so I guess I'm a bit old school. But I notice that some of my crucial interactions with people are not accomplished easily without face-to-face communication. Perhaps I'm doing something wrong, or I haven't found the right way to do the job. But there's an additional possibility. Perhaps the bandwidth of the unspoken conversations that take place through body stance and body language is significant in a face-to-face encounter.
2011/03/25
2011/03/19
Ownership
One of the most important decisions a bank needs to make about its EFT switch is probably whether to use a hosting service or to acquire a switch and host it in-house. There are a lot of factors that come into play when considering this decision, which is by no means an easy one.
Generally speaking, the decision to use a hosting service always translates to relinquishing some control to the hosting provider. Depending upon the size and focus of the bank, this is not necessarily a bad option. The hosting provider can protect the bank from a lot of the complexities involved in running a switch.
Most of the time though, large banks opt for an in-house switch that they control and operate themselves because they perceive (and rightly so) that:
- They can have more flexibility in terms of the functionality they offer to their cardholders.
- They can respond to market drivers faster and deliver products quicker.
- They can differentiate themselves from other banks (which is really a manifestation of the first two points).
All the above become the bank's responsibility. Departments need to be staffed with resources that have considerable EFT experience in order to take on tasks such as these for the duration of the system's lifetime. Where does the vendor come into play in this scheme of things? It's doubtless very important to choose a vendor with solid support skills, a pro-active approach to business drivers and a general outlook towards the EFT landscape. Ideally the switch vendor will act more as a business partner and consultant than a simple provider of software licenses. A vendor may also provide resources to perform certain technical tasks on behalf of the bank.
But the one thing that a vendor cannot do is own the process of setting up and running an in-house switch on behalf of the bank. One might say that this needn't be so: just get resources from the vendor to perform the tasks the bank ought to do and be done with it. Well, that could work... but then again, that is what hosting providers do. And they do it more efficiently and, I would dare say, a lot cheaper. If the intent is to be more flexible and innovative, the bank should have its own people that live and breathe the product they've chosen. Getting immersed in the switch system is the one thing that cannot be outsourced if the switch system is in-sourced.
2011/03/13
I/O matters
There are no large banks or processors in the part of the world I live in. Most banks see a few million transactions per month processed through their switch and credit card systems. A couple of banks and a large processor may reach some tens of millions of transactions per month.
Naturally, a simple figure such as "X transactions per month" does not tell the whole story. Other considerations are also important. External factors and timing usually combine to drive transaction-per-second numbers to sometimes unexpected heights. Christmas is a very good example; all banks and processors see a sharp increase in EFT traffic during that time of the year.
So we get terms like "15-minute average TPS rate" and "peak TPS rate". These terms indicate that an EFT system can get busy in a number of very different ways, all of which are important. 20 million transactions per month translate to an average of 7.7 TPS. That may not sound like much. But what if half a million of those transactions occur on December 24, between 11:00 and 13:00? For those two hours, that translates to an average load of almost 70 TPS, to say nothing of the minute-peak TPS numbers. Needless to say, you need to plan for at least that 70 TPS situation, with lots of capacity to spare for the peaks.
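The arithmetic above is easy to sketch out; the figures are the ones from the text, and the two-hour Christmas Eve window is the illustrative scenario, not measured data:

```python
# Average vs. peak TPS for an EFT switch.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

monthly_volume = 20_000_000
average_tps = monthly_volume / SECONDS_PER_MONTH
print(f"monthly average: {average_tps:.1f} TPS")  # ~7.7 TPS

# Half a million transactions squeezed into a two-hour shopping peak:
peak_volume = 500_000
peak_window_seconds = 2 * 3600
peak_tps = peak_volume / peak_window_seconds
print(f"peak window:     {peak_tps:.1f} TPS")     # ~69.4 TPS
```

The gap between the two numbers is the whole point: capacity planning against the monthly average alone would undersize the system by almost an order of magnitude.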
What's the most important differentiating factor that determines the mileage of an EFT system? That's I/O capacity. This usually comes as a surprise to the uninitiated but it's only a logical conclusion. An EFT system does a lot of its magic through the database. The database is stored on the disk. The disk is slow; in fact it's the slowest subsystem inside modern computers and that's why there's so much investment in caching strategies.
Whenever I'm asked about system upgrade paths, I always point out that simple fact and ask customers to examine their options. Have a single server hosting the EFT system? Split the transaction log and database files onto two different disks. Is that not enough? Have a look at those solid-state appliances for single-server use. Too isolated and compartmentalized? Get one or more Fibre Channel cards, move the database to the enterprise storage and allocate a LUN with premium priority to it. These upgrade steps work wonders for TPS throughput rates. The CPU is usually the last component eligible for an upgrade, unless you consistently see more than 50% usage sustained for several minutes at a time.
2011/03/09
"Standard" ISO8583
After many years of dealing with banks, processors and people related to payments I still get to hear it from time to time. It goes like this:
My question: "What's the application protocol used by the X entity?"
The response: "ISO8583".
As if that says it all! It's true that the knowledge that an application protocol is based on ISO8583 does convey some information. I immediately know that there's going to be a jumble of fields whose presence is defined by a bitmap of some sort. I know that I'll have to deal with things like the processing code, STAN, RRN, POS code and other related fields that I may have dealt with before. I also know that I have a good parsing facility at my disposal and I can use it to implement the protocol.
But that's about it. Some people indicate that application protocols based on ISO8583 are dialects of a protocol. Although this sometimes appears to be a legitimate analogy, I would say that ISO8583 defines the consonants and vowels of a language. People that speak the same language with different dialects can communicate after a fashion. They'll find some common ground, a verbal subset with common meaning to them both.
Computers speaking different flavors of ISO8583 cannot communicate at all. It only takes one tiny bit of difference to throw everything out the window. ASCII or EBCDIC? How is the bitmap packed? Is that LLVar or LLLVar? Is this field mandatory, optional or conditional and what's the condition defining its presence or absence? What's that private data field, I don't know how to decode it!
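To make that concrete, here's a minimal sketch of two of those dialect choices: unpacking a primary bitmap, and reading the same variable-length field as LLVAR versus LLLVAR. The field numbers and sample data are illustrative, not any real institution's spec:

```python
def parse_bitmap(hex_bitmap: str) -> set:
    """Return the set of field numbers present in a 16-hex-digit primary bitmap."""
    bits = int(hex_bitmap, 16)
    # Bit 1 is the most significant bit and flags field 1, and so on down to 64.
    return {i for i in range(1, 65) if bits & (1 << (64 - i))}

def read_llvar(data: str, pos: int):
    """LLVAR: 2-digit length prefix, then that many characters."""
    n = int(data[pos:pos + 2])
    return data[pos + 2:pos + 2 + n], pos + 2 + n

def read_lllvar(data: str, pos: int):
    """LLLVAR: 3-digit length prefix, then that many characters."""
    n = int(data[pos:pos + 3])
    return data[pos + 3:pos + 3 + n], pos + 3 + n

# A bitmap flagging fields 2 (PAN) and 3 (processing code):
fields = parse_bitmap("6000000000000000")
print(fields)  # {2, 3}

# The same PAN encoded by two different "dialects" of the wire format:
pan_llvar, _ = read_llvar("164111111111111111", 0)
pan_lllvar, _ = read_lllvar("0164111111111111111", 0)
print(pan_llvar == pan_lllvar)  # True -- same value, different bytes
```

A parser configured for the wrong variant reads the length prefix off by one digit, and from that point on every subsequent field lands in the wrong place: exactly the "one tiny bit of difference" effect.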
Questions like these have to be not only answered but also asked. I've seen application protocols based on ISO8583 that are crafted for POS transactions only. Or ATM transactions only. There are ISO8583-based protocols that cater for a multitude of transactions, and there are others that handle only balance inquiry and cash advance. Given the structure, capabilities and degree of configurability of your ISO8583 parser, some ISO8583-based application protocols may be implemented in a couple of weeks. Others may need several months. Being based on ISO8583 doesn't tell the whole tale.
2011/03/05
The sun is beginning to set
I never understood the way ACI handled the Base24 sunset. When the policy was first announced I was amazed. The need to move forward is understandable. But why endanger your leading market position in a way that creates opportunities for your competitors and displeases your customers at the same time?
Well, maybe there were other considerations as well. Perhaps ACI needed to focus their resources on Base24-eps and could not effectively do that while supporting two major and very different products at the same time. Maybe they did their homework and realized that Base24 classic, while undeniably a great cash cow and the product with the largest market share, is a dying beast which is slowly being displaced by more modern systems. Perhaps they projected into the future and, after seeing that they were losing their leading position one installation at a time, discovered that a radical change in policy was both inevitable and necessary.
Whatever the reasons, it's now official: the sunset of Base24 classic has forced customers to get out there and look for alternatives. They may not be out there en masse, and they may have totally different outlooks depending on corporate culture, but it's definitely happening.
I've recently gone through the exercise of responding to an RFP for Base24 classic replacement. Three things are very clear.
- The customer is aggravated. The feeling is not exactly oozing out of the pages of the RFP, but it's clear that they didn't like their hand being forced.
- The customer is aware that a lengthy migration is inevitable, they have no delusions about that.
- The existing Base24 classic installation was customized to an incredible extent, which is understandable if one realizes that the system has been in place for well over a decade.