2011/12/03

Implementing authorization scripting

We've recently set out at my workplace to perform a daring undertaking: implement authorization scripting for a payment system that does not have a scripting facility. The idea initially surfaced some time ago as a half-joke. "Wouldn't it be something if the switch supported scripting?" It seemed a bit crazy at first.

We debated amongst ourselves for a while about the merits of such a capability. During our time working with the payment system, it became apparent that we could do anything we wanted by using the software development kits. It was also obvious that some solutions we provided were bloated by design, built from static code, and would be better implemented by authoring a quick script. Validating the input of a cardholder performing a custom payment at the ATM is a very good example of such a situation. Validations typically examine a few transactions and verify the cardholder input by using a custom verification algorithm, such as a variation of a check digit verification algorithm. The typical solution would be to author a custom plug-in and attach it to the payment system; easy, and it can be delivered in a few days. The scripting solution would require writing a few lines of scripting code; easier, and it can be delivered in a couple of hours.
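
To give a flavor of what such a script might look like, here is a minimal sketch in Python, purely for illustration; our actual scripting language and its entry points may differ. It shows the kind of check digit verification described above, in this case a plain Luhn check:

```python
# Hypothetical sketch: a validation script of the kind described above,
# assuming the scripting facility hands us the cardholder's input as a
# string and expects a boolean verdict. This is a plain Luhn check.

def luhn_valid(digits: str) -> bool:
    """Return True if the input passes the Luhn check digit algorithm."""
    if not digits.isdigit():
        return False
    total = 0
    # Walk the digits right to left, doubling every second one.
    for position, char in enumerate(reversed(digits)):
        digit = int(char)
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

# Example: validate a payment reference entered at the ATM.
print(luhn_valid("79927398713"))  # True - the classic Luhn test number
```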

There was value in the idea, no matter how preposterous it might seem. It's not, after all, every day that you set out to plug in something as significant as a scripting facility to a payment system. However, the flexible nature of the system allowed us to be optimistic.

So we rigged up a proof-of-concept implementation and tried it out. The results were surprisingly pleasant, and after working with our pilot everyone started thinking of scripts for everything. Stress-testing the solution to more than one thousand scripts per second was the final action before we started using the ideas from the pilot to create a full-blown system.

Some time has passed and we're now there, with a scripting product finalized. It's surprising how your perspective changes from the time when you're building proof-of-concept, throw-away code to the moment you set out to create a polished product. A gazillion things that we didn't expect needed to be handled in a manner that makes sense to an end-user. Create a new scripting language or use an existing one that lots of people are familiar with? Which one? How would the user write and debug scripts? How would the scripting service be notified of configuration changes? What facilities should the scripting service offer to the end-user, on top of those provided by the scripting language itself? What level of control should we allow to scripts? How could scripts be extended to communicate with external data? And external systems?

Once again, the graphical user interface took a disproportionate amount of time when seen through the eyes of a developer/integrator who would be quite happy with configuring everything through a raw XML interface. But the real lesson here was understanding the need for a roadmap in order to reduce the scope to something manageable and release version 1.0 in a reasonable amount of time.

If we took the time to implement everyone's idea, and lots of people had great ideas, we would be lucky to release 1.0 sometime by the end of next year. One developer had a great vision of how the graphical interface could be made much more intuitive. Another proposed a radical implementation that would allow anything in the payment system to be scripted. Yet another suggested a workflow that would push scripts from an author to an approving member, then to testing, to QA and then to production. I often found myself in the awkward position of having to shoot down good ideas that I liked just to keep the scope manageable for an initial implementation. "That's for version 2.0" became a phrase spoken quite a few times.

Having a developer's background, I can understand what a killjoy I turned out to be. Still, I had to be firm about the release date; otherwise there wouldn't be a specific release date at all. And I do like releasing software!

2011/11/05

Writing an ISO8583 SOA Adapter

One theme that keeps coming back over the last few years is how best to integrate payment systems in a SOA environment. The SOA acronym may mean different things to different people but I've found that, more often than not, when developers think of SOA, web services and related technologies are involved.

Web services offer the advantage of providing a common framework of communication between different systems. The concept is powerful enough to bridge the gap between different programs, different development languages and different operating systems.

It should, therefore, not come as a surprise that payment system users increasingly require their payment systems to connect to web services. Some payment systems are better at this than others. But I've found that the majority of payment systems have ISO 8583 so close to their heart that it's not as easy as it should be to use an external web service.

During the last few months, I've decided to tackle this in a generic way to allow payment systems to connect to web services at the backend. The approach is simple: create a SOA adapter that translates ISO 8583 to web service calls in a configurable way. The SOA adapter would connect to the payment system, receive an incoming ISO 8583 message, understand what is being said, call the appropriate web service and return the results to the payment system using an ISO 8583 response message.

This seems easy enough at first glance and it's a task that, in some form or another, most developers have encountered at some point in their career. Communicate with one system using its own language/API and translate what is being said to another system's language/API.

In the context of the SOA adapter, there is a complication. On the one hand you have web services that, more or less, use the same language. But on the other hand you have payment systems that use ISO 8583 in completely different ways that are often incompatible with each other.

One approach to tackle this is to create different flavors of the SOA adapter. This way you could end up with SOA.BaseI, SOA.Biciso, SOA.PostBridge and several other combinations that cater for specific needs. While this could solve the problem, it quickly drives up maintenance costs as it's necessary to keep track of different versions of the software.

The approach I've used is to code an ISO 8583 implementation generic enough to be able to read the application-level protocol from an XML configuration file. In this way, you have a single version of the software which comes with several configuration files so you just pick one that works for you. Plus, you get the benefit that you can implement an ISO 8583 customization by changing an existing XML file or by creating a new one; no code changes.
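
To illustrate the idea, and only the idea, here is a minimal Python sketch of a parser that reads field definitions from XML and decodes a message accordingly. The element names and layout below are invented for this sketch; they are not the adapter's actual configuration schema.

```python
# Sketch: field definitions for one ISO 8583 flavor live in XML, and a
# generic parser reads them at runtime. All names here are illustrative.
import xml.etree.ElementTree as ET

FLAVOR_XML = """
<protocol name="ExampleFlavor">
  <field id="2"  name="pan"             format="llvar" />
  <field id="3"  name="processing_code" format="fixed" length="6" />
  <field id="11" name="stan"            format="fixed" length="6" />
</protocol>
"""

def load_fields(xml_text):
    """Build {field id: definition} from the XML protocol description."""
    fields = {}
    for el in ET.fromstring(xml_text).iter("field"):
        fields[int(el.get("id"))] = {
            "name": el.get("name"),
            "format": el.get("format"),
            "length": int(el.get("length", "0")),
        }
    return fields

def parse_fields(data, present_ids, fields):
    """Decode the listed fields from an ASCII message body, in id order."""
    result, pos = {}, 0
    for fid in sorted(present_ids):
        spec = fields[fid]
        if spec["format"] == "llvar":      # two ASCII digits give the length
            length = int(data[pos:pos + 2])
            pos += 2
        else:                              # fixed-length field
            length = spec["length"]
        result[spec["name"]] = data[pos:pos + length]
        pos += length
    return result

fields = load_fields(FLAVOR_XML)
# Fields 2, 3 and 11 present (in practice the bitmap tells us that).
print(parse_fields("164929123456789012301000001234", [2, 3, 11], fields))
```

Swapping in a different XML file changes how the same bytes are decoded, which is the whole point: one code base, many flavors.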

After completing this development exercise, the result is a clever piece of middleware that can be used to quickly connect payment systems to external web services using configuration only. The notion appears to be good enough to take to market. An end-user could connect the SOA adapter to their system, edit a few configuration files to describe their web services, methods and parameters, and the SOA adapter just picks this information up and does the rest.

It may come as a surprise to some, but the most difficult task, which I'm still battling with, is the creation of a user interface to drive the configuration. Strictly speaking, a config GUI is not required to use the SOA adapter, especially when the technical background of potential end-users is considered. But, as my boss says, a GUI is more than just a necessary evil; it's the front end of the system and plays a major role in forming the perception users have of the whole system.

2011/10/03

And the winner is: ACI

After a short struggle, ACI has managed to acquire S1. In a big turn of events, starting from the S1-Fundtech merger proposal and ending with today's (3/10) press release, ACI has finally achieved its objective.

This acquisition creates a true payments software giant. Not that ACI wasn't the market leader before; but with this acquisition the new ACI will hold the vast majority of the market, with all other competitors coming a very distant second.

How does this translate for ACI and S1 customers, both existing and prospective ones? No way to tell at this point. It all depends upon the strategy that ACI will follow in the next few months. Will Base24.EPS be favored over S1's Postilion? Or will ACI keep both offerings...for now? Despite the original ACI position, Base24.EPS and Postilion have much more in common than not and it might not make sense to keep both products alive.

This situation certainly is familiar to people that remember what happened to Open/2. Postilion, though, had more traction than S2's Open/2 and ON/2 combination, and a lot more customers have chosen Postilion during the last two or three years. I would expect that there are a lot of nervous customers around, wondering how this situation will play out.

2011/09/22

ACI vs S1: now what?

It looks like ACI has been at least partly successful with their offer to acquire S1. Partly because the proposed merger between S1 and Fundtech is now history and ACI has averted the rise of an S1-Fundtech combination, which would be an even bigger competitor than S1 is in its own right.

So what happens now? From the turn of events it appears that, after the ACI offer turned hostile, both ACI and S1 are building up their S1 stock to gain control of the company. The S1 board is fiercely fighting against this but, despite initial impressions, it's not going to be easy for S1 to maintain autonomy.

When the S1 stock jumped up a few notches, some sold and some bought. In the end, it's not entirely clear whether the S1 CEO can rally the shareholders to fight off the attempted ACI takeover. And with the shareholder meeting for the Fundtech merger cancelled, it remains to be seen who has the most votes.

The current situation is not exactly what S1 hoped for. There is uncertainty amongst both existing and prospective customers, as well as resellers of S1 products. Will the S1 payments platform still be around a year from today? That is the important question here. Although not a betting man, I would be willing to bet that the current situation is hurting S1 financially. Prospective customers, especially those affected by the ACI sunset, would not feel comfortable selecting S1 now only to find out that they'll be dealing with ACI again after a few months.

With things being what they are, a shadow of doubt is cast upon S1's future. Even if ACI does nothing at the moment, no one can be sure that they're not just accumulating S1 stock and trying to convince the shareholders to back them up so they can force a change of the board members, or reach an agreement with the board behind the scenes.

In that sense ACI has gained an advantage over S1, at least for now. S1 has to clear this mess as quickly as they can, especially during the year of the setting sun. Unless, of course, the game ends with ACI managing to acquire their most serious competitor.

2011/08/31

Opportunities of large-scale migrations

Time marches on. Technologies change and systems are upgraded. Every once in a long while, banks take a good hard look at their payment infrastructure and decide that it's time for a change. This may happen every few years or every twenty years, depending on the bank's culture. It's that time of the decade. It's migration time.

An RFI is circulated to several companies. Then an RFP is sent to the select few that seem capable of doing the job of migrating the bank from the now-perceived old dinosaur to a platform advertised as supporting the latest and greatest of technical goodies and driving customer innovation to new, unheard-of heights. Committees form to evaluate the responses to the RFP. Vendors drool over the possibility of a new customer, especially if the customer currently uses the arch-enemy's product. Discounts rise to unbelievable heights. The bank negotiates the hell out of the deal.

The bank typically pays attention to the following items:
  • Budget. These days, this is one of the most important factors in shaping a decision.
  • Vendor capability to execute. Regardless of the value of each product, great attention is paid to the ability of the vendor to present and execute a good migration plan that minimizes the risks as much as possible.
  • Vendor product, customer references, vision and fit. All vendors advertise their product as being the best on the market. But is the product a good fit with the bank? Does the vendor have a product vision? Are there enough positive customer references?
After some time a decision is made and a migration project begins. Depending on the technological and architectural gap between the system being replaced and the system replacing it, the migration can range from tricky to painful. Some areas are more problematic than others, requiring special handling and attention.

It's a time of change. And a great opportunity for the bank to look hard at some of the existing processes, procedures and infrastructure. I almost always advise against introducing unnecessary changes during a large scale project. Most of the time, changes introduce some level of risk and nobody likes that. However, a radical case such as migrating from one payment system to another offers some opportunities that are simply too great to pass up.

Perhaps the single most important thing a bank can do at this point is re-evaluate some of the things they do and, most importantly, examine why they are doing them the way they do. It can be pretty amazing to discover that some things just happen in a certain way because someone was used to doing them that way two systems ago. Or that some manual and/or tedious process can be automated or simply removed because of the architecture of the new system. That's the reason business solution specification cycles tend to focus on the "why" instead of the "how". The question "why is this process in place" is always very important.

Other, more technical, matters can be closely examined during a migration. It is always advisable to try and play by the rules of the new system and not against them. This might necessitate changing a few existing interfaces or altering some existing process to better fit the new system. A good example that comes to mind is the way the host system communicates with the payment system. More often than not, the host uses an application protocol that does not resonate well with the new system's designers (especially true for stream protocols with new systems that favor ISO-style protocols). In my experience, re-programming the host to make it understand a modern protocol is always a dreaded exercise. But it's worth it in the long run. Having the new system implement a custom protocol is certainly doable, but the repercussions of such a decision will be felt for years - possibly for the duration of the lifetime of the new system. Choosing the new system's native, modern application-level protocol - one that is seamlessly and continuously upgraded to cater for new needs - may require an expenditure of a few hundred man-hours of host programming. But in the years to come no one will have to worry again about protocol translation, mapping and all the related headaches of such maintenance exercises.

Other systems, in production or planned, could also be examined and amended to be as compatible with the new system as possible. As a rule of thumb, new systems tend to offer way more integration options than older systems. It might be a good idea to take advantage of this flexibility and retire ugly hacks that were necessary in order to play with the old system but are now obsolete. Example: channel integration. Older systems usually treated channels very differently according to their capabilities. The result is that newer channels have a hard time communicating with the payment system, making it necessary to resort to weird workarounds. Internet and MOTO channels might use a standard POS interface to send transactions because that was the only way to do it. But if the new system offers a simpler and standardized way to achieve the same result, it's better to bite the bullet now and make the change to use that interface than to suffer from the shortcomings of the old interface forever.

I'm obviously not advocating a complete re-engineering. The norm should be the payment system bending to cater for the needs of the bank, not the other way around. But this rule should not always be blindly applied to everything. There will be situations where change, especially in existing processes that are not transparent to the end users, could be beneficial to the bank.

2011/07/31

ACI bid for S1

News of the ACI bid for S1 caught up with my summer vacation pretty quickly. Just a few weeks after the announcement of the S1-Fundtech merger, ACI stepped in and made an attractive bid for S1. According to ACI, its offer represents a more attractive option to S1 shareholders than the S1-Fundtech merger.

Some people will remember that this is not the first time ACI made a bid to acquire a competitor. Back in 2002, if memory serves, ACI acquired S2, which was the maker of Open/2 and ON/2. In those days, S2 held a reasonable share of the market (although much less than ACI). The result of the acquisition was that ACI in effect placed both Open/2 and ON/2 in sunset status. The bid was simply a move to take out a competitor, eliminate the competing products and take over the S2 customer base.

Back then, it wasn't a secret that S2 was not in the best of financial shapes. This is not the case with today's S1, even without considering the financial size of an S1-Fundtech combination.

Perhaps none have more interest in the ACI bid than current and prospective Postilion users. And it is understandable. Users wonder what the future of the Postilion product will be. Would ACI want to capitalize on the platform and continue to develop it? Or would they favor their existing EPS product and just kill Postilion?

It's impossible to tell at this point. The ACI announcement speaks of "complementary products" but that's really not the case. Postilion and EPS have much more in common than not, in terms of functionality offered to end users. Under that light, why would ACI want to keep two pretty different platforms that provide the same offering to their customers? Wasn't one of the reasons to sunset Base24 classic based on the principle of focusing business development on one platform only?

Some might suggest that ACI plans to use part of the technology behind Postilion and incorporate it in their EPS product. If that's so, the logical plan would be to move some of the Postilion tech leads to the EPS team and use their skills there. But in this case, Postilion would be gradually abandoned, with no business improvements scheduled and with minimal support for card association updates only. ACI would issue a sunset statement for Postilion at a future date, while figuring out a way to offer a realistic migration plan to EPS to current Postilion users.

Perhaps ACI might even consider a wait-and-see tactic and see how their sales force does with Postilion. Maybe there's a plan to go with Postilion on small and mid-level bids and go with EPS & IBM on large bids; not that it makes a lot of sense from a technological point of view.

Considering the above, a buy-to-eliminate plan would seem to make sense. But then again, as a friend of mine pointed out, if that's the case we'd be talking about a very expensive move. And that's true. Spending more than half a billion just to kill off a competitor does seem a bit over the edge and won't win ACI many fans among its shareholders.

There is still no reaction from S1, other than a statement that the bid will be reviewed by the S1 board. I guess we'll just have to wait for the S1 response to see how this whole affair will play out.

UPDATE: S1 has quickly rejected the ACI bid, saying that it was not in the best financial and strategic interest of their shareholders. To translate, S1 is saying that they have complete faith in their own long-term business plan. And perhaps rightly so. Postilion has evolved from a regional switch owned by a relatively small company to a global player praised by Gartner, sold and operated worldwide. Considering events of past years, I suspect that the S1 CEO was the driving force behind this decision.

This puts ACI in a bad position. In competitive comparisons, ACI is now going to have a hard time explaining to prospects how their own EPS offering is better than Postilion. Likewise, the S1 sales force will surely make certain that prospects know about this series of events and the failed ACI attempt to buy them.

UPDATE: ACI is fighting back against the S1 board's decision and wants to get direct support from S1 shareholders. In my very humble opinion, I very much doubt that the S1 board did not contact the major S1 shareholders before rejecting the ACI proposal. Regardless of what's happening behind the scenes, it's becoming painfully obvious that ACI is not happy with the S1-FundTech deal and will go to any lengths to prevent it.

UPDATE: Well, this is getting more and more exciting. After ACI's attempt to solicit proxies against the S1-FundTech merger, S1 issued a comprehensive response to shareholders. Soon thereafter, ACI increased the cash part of its bid to acquire S1. I think that ACI is trying to persuade investors who are not in for the long run to go for the cash. The ACI press releases keep on singing this tune. In these uncertain times, some investors might very well be tempted. The S1 press release recommending a vote for the S1-FundTech merger appears to offer more concrete arguments to shareholders.

2011/06/17

The twist in DCC

There are some countries, such as Greece, with intense tourist traffic from all over the world. In cases like these, banks and acquirers that have terminals located at major tourist attractions see a sharp increase in transaction volumes that starts in May, lasts throughout the summer and starts to decline in October. It's no wonder that transaction numbers go through the roof, mostly because of increased acquiring traffic.

If an acquirer or bank finds itself with a lot of well-placed terminals in such a scenario, then an attractive solution worth looking at is Dynamic Currency Conversion, or DCC, at the ATM and the POS. When a cardholder who comes to Greece from outside the Euro zone tries to perform a cash advance at the ATM, the acquirer gives out Euro notes or charges for goods and services in Euros. In order to do that, the acquirer first requests authorization from a card association, which in turn routes the transaction to the issuer.

During this process, the card association or the issuer has to do a currency exchange in order to buy Euros. When this happens, a small percentage is charged by whoever is doing the currency exchange and is added to the settlement amount that is debited to the cardholder's account. Sometimes this is done by the card issuer, who directly charges the account of their cardholder with this FX charge. More commonly, this is provided as a service by the card associations to the issuers, and when this happens issuers can (and frequently do) request that the card associations charge an additional percentage. In essence, the cardholder pays for the FX charge, which is pocketed by the card associations and/or the issuers.

The acquirer typically does not see any part of this action. Unless the acquirer performs DCC at the ATM or the POS. When this happens, the cardholder is given a DCC offer by the acquirer at the transaction terminal: get billed in Euros (the transaction currency) or get billed in your own currency at this rate. If the customer selects the DCC offer the acquirer gets to specify the exchange rate, keeps the FX commission and sends the transaction to the card association with the transaction currency, the settlement currency and the exchange rate used.
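
Here is a worked example of the economics, with all numbers invented for illustration: a UK cardholder, a EUR 100 withdrawal, a hypothetical mid-market rate of 0.8700 GBP/EUR and a 3% acquirer markup.

```python
# A worked example of the DCC economics described above (numbers invented).
amount_eur = 100.00
mid_market = 0.8700          # GBP per EUR, hypothetical
markup = 0.03                # acquirer's FX commission

dcc_rate = mid_market * (1 + markup)
billed_gbp = amount_eur * dcc_rate

print(f"DCC offer: GBP {billed_gbp:.2f} at {dcc_rate:.4f} GBP/EUR")
# DCC offer: GBP 89.61 at 0.8961 GBP/EUR
# The ~GBP 2.61 above the mid-market conversion stays with the acquirer
# instead of going to the card association or the issuer.
```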

Why would a cardholder choose DCC? Most of the time, the answer is simply because the cardholder is familiar with his own currency but not the local currency. A Briton understands 100 British pounds but needs to think about what 100 Euros mean. Some cardholders compare the exchange rate of the DCC offer to that given by local exchange rate kiosks and choose the DCC offer because they perceive that it is attractive. Whatever the reason, the business case is huge for the acquirers as they're taking a piece of the pie out of the card associations and the issuers. There are some hidden pitfalls here and there as DCC is a pretty complicated process especially with clearing, settlement and disputes. But still, the business case is very real.

There's one little catch. It's easy for the acquirer and the merchant to get greedy. Too greedy for their own good. Cardholders forget their typical transactions after a few days but they remember their bad experiences for a very long time. And they let their friends know. That kind of negative viral marketing can go a long way towards hurting the acquirers and the merchants that don't play nice.

How does greed come into play in this business? Through the simple fact that more DCC uptake means more income. When offering DCC, merchants are supposed to clearly present it as a choice to the cardholder, not as mandatory. Merchants may skip this simply to get the added income; this is particularly true for specific merchants such as hotels. Acquirers, on the other hand, may elect to raise their exchange rate commissions at specific times and specific places. For example, it's easy to think that for transactions originating from bars or other entertainment venues at 2:00 after midnight, the acquirer can get away with doubling their exchange rate - the cardholder probably has other things on their mind and won't notice the inconspicuous exchange rate offered at the terminal.

Eventually, this doesn't go unnoticed. The DCC process is already associated with the word "scam" by some cardholders, simply because they were fleeced once (or, alas, more than once) in Italy, Spain or Ireland, to name some of the European countries where DCC is in widespread use. Eventually, cardholders don't even look at the exchange rate offered and prefer to deal with the devil they know (Visa, MasterCard and their own bank) rather than get charged through the nose. Acquirers looking to implement DCC should pace themselves and realize that, if configured wisely and respectfully towards the cardholders, it can provide a respectable source of income.

2011/05/25

Lessons learned during a DR implementation

During the last few weeks I've been engaged in the implementation of a disaster recovery solution for a payment system. The task is nearing its completion; usually that's the period when the earth moves out from under your feet due to some unforeseen factor or a detail somehow missed. Thankfully, that doesn't appear to be the case in this particular project.

It's always a nice feeling to get such an important and complicated project under your belt. During the course of this project I haven't really picked up any particular technology skills worth mentioning (except breaking my personal record for the time needed to install the payment system from scratch). However, I gained some experience in practical DR planning and learned a few important lessons. Here are the most significant ones.
  • Closely examine dependencies. In the case of a payment system, it's very difficult to have it operate solely on its own. The payment system usually interfaces with a lot of other entities. These can be the bank's host, external card management systems, Visa and MasterCard facilities, help desk system, middleware servers and others. Some of these are more important than others but it should be defined early on what is an acceptable disaster recovery setup. For example, it doesn't make much sense to have disaster recovery planning for the payment system but not the bank's host. These external systems must be a part of the disaster recovery planning as well.
  • Make sure that there are well defined, written procedures for switching to the DR site. This appears deceptively trivial, right? Well, it's not as easy as it sounds. It really depends upon the organizational skills of each institution, but it can be a tricky exercise.
  • Don't be afraid to break the norm and use the infrastructure available. In my case, keeping the disaster recovery site in sync with the production site is officially achieved using replication software. However, the bank already had a SAN in place and that was already replicated to a disaster recovery site. By not following the standard approach, we managed to leverage what appears to be a vastly superior solution in the DR implementation.
  • Test every possible scenario you can think of. By far the easiest DR scenario is one where everything goes smoothly and you can perform an orderly shutdown of all servers, then switch to the DR site. But what about a power outage in mid-transaction? What about a software failure severe enough to render the production site unworkable? Would the plan to switch to DR work under such circumstances? Test it and find out.
  • Test exceptional scenarios under load. Everything runs smoothly in the test environment where you have a load of exactly 0.01 transactions per second. How about loading up the test system with 40 TPS and seeing how a switch to the DR site goes? That's what's going to happen in a real-world scenario.
  • Bundle as few infrastructure upgrades as possible in a DR project. It's tempting to say that since we'll test everything anyway, how about also upgrading our database server and installing those 25 patches to the payment system as well? Well, that's a thought and it can save you some time...if everything goes smoothly. But the last thing you want is to end up chasing down problems due to upgrades when you should be testing the DR plan.

2011/05/08

Documentation

I visit the CodePlex site quite often. As a developer, I try to get a glimpse of the efforts of other developers for various reasons. Some of the projects found on CodePlex are one-year schemes, possibly a junior developer's attempt at fame but without serious motivation. Most of the projects revolve around topics that appeal to younger developers, especially ASP.Net and MVC, HTML, content management and other related popular technologies.

Occasionally, I find an open source project that stands out from the others. Some time ago I got interested in CommonLibrary.Net. This little project is a library of reusable code, but it differs in several ways from other similar endeavors. A very encouraging point is that the project started out in 2009 and the main author is very active, committing regular check-ins, continuously evolving the library and posting releases with new features and fixes. Another feature that grabs the attention of the visitor is the large namespace and the multitude of developer helper classes that were fitted into this project. Finally, visitors browsing the code can quickly see that it's of unusually high quality, with the main author clearly knowing the tricks of the trade.

There is one sore point, and CommonLibrary.Net is by no means the only project on CodePlex that is problematic in this area. The quality of the documentation is poor at best. There are examples of using the library, and one project deliverable is a help file generated from the code's XML comments. But it's not enough. The examples focus on showing how to use specific namespaces of the library but do not explain the general idea behind them. Browsing the code, one can quickly see that there are several problems with the XML comments from which the help file is generated: parameters are not documented, comments are copy-pasted between overloads and are incorrect, class and method comments are frequently vague, and a lot of protected or public members remain undocumented.

I think that there are several cases where an open source project can get away with little or no documentation at all. But a developer's library project simply isn't one of them. I would go so far as to say that the documentation may very well be the most important deliverable of such a project. Users of code libraries are themselves developers, but that doesn't mean they want, by default, to read the source code to understand what's being done under the hood. And the notion that a well-laid-out namespace and consistent naming conventions are their own documentation is correct but way overblown if applied to the max at the expense of code comments. True, one can easily gather that ComLib.Scheduling.Scheduler has something to do with scheduling tasks, but how are the tasks scheduled exactly? What is a task? What triggers a task? Is there something a task should not do? How can callers know the status of their tasks? Can they stop them, reschedule them, pause them? Can they add a task dynamically at runtime? At start-up time? How can they gracefully stop all tasks when the program is shutting down?
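
As an illustration of the alternative, here is the kind of comment that answers such questions instead of restating the method name. It's written in Python with an invented scheduler API, since the point applies to any language, not just the .NET XML comments discussed above.

```python
# Illustration only: an invented scheduler API, documented the way a
# library consumer would want it documented.

class Scheduler:
    def add_task(self, name, action, interval_seconds):
        """Register a recurring task and start running it.

        Tasks may be added dynamically at runtime; the first run happens
        one full interval after registration, not immediately. The action
        runs on a worker thread, so it must not block for longer than the
        interval and must not touch thread-affine resources. Use
        remove_task() to stop a single task; shutdown() stops all tasks
        gracefully when the program is shutting down.
        """
        ...
```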

Fortunately work is being done at CommonLibrary.Net to slowly fix that. It appears it might take a while but it's being done. Documentation for code libraries is indispensable.

2011/04/30

Change management

When developers hear of "version control" they tend to think of CVS, SVN, Git or other source code version control systems. The idea behind version control is to have a repository of the work done so far, accompanied by the corresponding metadata (who changed a source code file, what was changed, what comment the developer left when committing a change, when the change was committed, etc.). Version control is such a basic and essential facility that no developer or programming hobbyist who takes their work seriously can live without it.

From the perspective of the end-user, the corresponding process to manage the life cycle and operation of a payment system is change management. It has a somewhat different meaning and refers to the process used to track changes made to the payment system. Change management encompasses the complete range of activities that have to do with the configuration of a payment system. These may include installation of new modules, installation of patches, database or file configuration changes, operating system patches or configuration changes, the method by which each change is applied and other related information.

The main deliverable of change management is a detailed audit trail of installable modules, configuration changes and people that authorized and performed an installation or change to the payment system. This has the following benefits:
  • A change management process ensures proper change authorization.
  • It enforces a separation of roles between the people requesting a change and those who actually apply the change.
  • A change management process can identify change prerequisites or areas affected by a change and it becomes easier to properly manage any related issues.
  • Change reporting is feasible in a very structured fashion. It is easy to get a snapshot of the latest installed modules, patches or configuration. Likewise, it's also easy to find out the path of installs and changes that resulted in the current system. And it's also trivial to attribute changes to the business entity or person that authorized them. 
  • Once in place, a properly used change management system forces you to formally evaluate each change and assess the related implications. Likewise, it ensures that a change is applied by the authorized people in the correct manner. In short, it forces you to do things the right way.
It appears that for some organizations change management is a difficult notion to grasp, despite its easily defined purpose. After all, what's it good for? All organizations have some kind of process that needs to be followed in order to authorize and make changes to a payment system, so what would change management add to this process?

Well, sooner or later (most probably sooner) one of the following questions will pop up:
  • "I want to create a new QA system, what do I need to install and what do I need to configure?"
  • "Who authorized taking that patch live?"
  • "I want to create an image of the production server for DR purposes. What's installed in production?"
  • "Why did we make that change in the database?"
  • "How is the system configuration different to the default configuration?"
  • "What are the prerequisites before installing this patch in production?"
Without change management, answering each of the above is a time-consuming and error-prone task. Change management is therefore very important. It doesn't matter how it is implemented. An organization may elect to create a home-grown change management system, purchase a license for an existing system or go for a web-based or SaaS offering. The important thing is to have the process available and functioning.
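
As a sketch of what a single audit-trail entry could capture, here is a minimal change record in Python. The field names are invented for illustration; any real system will have its own schema.

```python
# A minimal sketch of the kind of audit-trail record a change management
# process produces; all field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    change_id: str            # e.g. "CHG-2011-0042" (made-up format)
    description: str          # what is being changed and why
    target: str               # system or component affected
    requested_by: str         # business entity or person asking for it
    authorized_by: str        # must differ from the person applying it
    applied_by: str
    applied_on: str           # timestamp of the actual change
    prerequisites: list = field(default_factory=list)
    rollback_plan: str = ""

# With records like this on file, "who authorized taking that patch live?"
# becomes a lookup instead of an investigation.
```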

2011/04/21

Tokenization

For several reasons, the acquirers and banks in the part of the world where I live have not seriously gone after web merchants to make them PCI compliant. I guess that PCI is one of those topics that everyone hates. I believe that for merchants in particular, the subject must really be viewed as a major nuisance.

The word is out, though, that Visa is getting stricter and that there's going to be a flurry of activity. Well, it was about time. I'm not a supporter of PCI just for the sake of PCI. But if you think about it, Visa is doing the merchants and the acquirers a favor. Security holes and procedural gaps that could potentially hurt the merchant and the acquirer will be assessed and addressed, and that's a good thing. A security breach can be serious enough to close you down. Even if you put monetary losses aside, a breach can generate enough negative publicity to put you out of business.

One way to minimize the scope of PCI is to introduce tokenization. Web merchants that use this technology can get to a point where no sensitive data ever gets into their systems, regardless of what those systems are. Only a card token and possibly a transaction token are stored. This information, if stolen or intercepted, is useless to a data thief and cannot be used to send fraudulent transactions from some other part of the world.
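
A toy sketch of the concept follows, in Python. Real token services typically add format-preserving tokens, expiry and strong access controls, none of which are shown here; the point is only that the merchant keeps the token and the PAN stays in the vault.

```python
# Toy sketch of tokenization: the merchant stores only the token, the
# PAN stays inside the token vault. Not a production design.
import secrets

class TokenVault:
    def __init__(self):
        self._pan_by_token = {}
        self._token_by_pan = {}

    def tokenize(self, pan: str) -> str:
        """Return a random token for the PAN, reusing any existing mapping."""
        if pan in self._token_by_pan:
            return self._token_by_pan[pan]
        token = secrets.token_hex(8)       # carries no card data at all
        self._pan_by_token[token] = pan
        self._token_by_pan[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        """Resolve a token back to the PAN; only the vault can do this."""
        return self._pan_by_token[token]

vault = TokenVault()
token = vault.tokenize("4929123456789012")
# The merchant keeps 'token'; a thief who steals it has nothing usable.
```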

There obviously need to be changes in the authorization flow if tokenization is to be introduced. For the acquirer the exercise is not exactly straightforward. Tokenization is not the typical service that acquirers provide to their merchants, at least not at this point in time. For banks that provide acquiring services to web merchants (and hence have smaller numbers of e-commerce transactions), the economics of the business case are even trickier. This is the reason why there are service providers with offerings that are centered mostly around tokenization and data security.

As payment systems continue to evolve, they will doubtlessly include tokenization as part of the standard transaction flow, and the window of opportunity for tokenization service providers will close. Securing the authorization process with card and transaction tokens will be the first step. Handling recurring payments is somewhat trickier and could be a batch-based process, but eventually payment systems will provide a solution for that business need as part of their standard out-of-the-box packaging as well.

2011/04/14

Ownership continued

Some time after blogging about "Ownership", it so happened that I got into a meeting with a company that provides payment consulting services. The people from that consultancy seemed nice and they certainly have been around the block a few times. After discussing a migration project, I was left with the impression that they knew what they were talking about.

As the conversation unfolded, we started broaching the topic of testing and certification. I quickly discovered that the consultants had a very different view of how testing and certification should be handled. I am of the opinion that when an end-user (a bank) wants to in-source their payment system, they should take ownership of all testing and certification activities and immerse themselves in these tasks.

To my initial surprise, I discovered that the consultants favored the extreme opposite approach. They pretty much wanted to come up with the test cases, execute the tests and be in charge of certification cycles. In short, they felt it was preferable to take on these tasks themselves rather than help the end-user do them.

"How can the end-user ever get off the ground if they do not own these processes?", I thought to myself. "How can the end-user even operate their own payment system and innovate with it if they can't even go through a test cycle themselves?".I presented these concerns to them. Their response was along the lines "We can provide this service to the end-user".

Sometimes I'm pretty slow, but at this point I grasped the obvious difference between us. The consultants are trained to act on behalf of the end-user. They are hired by the end-user and most of the time adopt the viewpoint of the end-user. When they are successful in a project, they think and act as the end-user and in essence they are the end-user. On the other end of the spectrum lies the vendor that provides the payment system software - the vendor would be very comfortable just selling a license. I happen to sit comfortably in the middle. I work with payment system vendors and, when I'm successful, I help the end-users integrate the payment system into their organization.

In the end, the consultants effectively re-validated my opinion: for an in-sourced payment system, the end-user must step up and take ownership of the system. Whether this is achieved with the end-user's own resources or by getting hired guns to do the job is a different matter. I'm not by default against getting consultants to run an operation and take on all the things that an end-user might find distasteful. Sometimes the conditions and the economics may be right to do just that, perhaps for a large retailer, a specialized processor or a global bank. But in the long run it just doesn't make sense for the average case. If continuous integration, patching, testing and certification activities appear to be more than an end-user can bear, there's a perfectly good alternative: out-sourcing.

2011/04/08

Complexity

Looking back at the 90s, the tools of the trade of programmers implementing payment solutions were horrific by today's standards. A great deal of work was done using Telnet and I remember that I quickly learned to hate the blue, text-only interface of the TTWin terminal emulator. Everything was slow. It wasn't a problem with the terminal emulator or with the text-based environment in itself, just a lack of programming accelerators and facilities. Whenever a project came along that allowed development outside of the proprietary box running the EFT switch, the contrast was evident.

Auxiliary tools were also non-existent or downright primitive. Message protocol simulators were basic and moody. Terminal simulators were starting to emerge but were costly and sometimes hard to use. Writing and executing tests was done with ad-hoc tools so esoteric that it was often too tempting to just do the whole thing manually. Traces were hard to read. Network protocols like SNA and X.25 required considerable expertise to get a hold of. And if you had to remotely connect to another system to do something, you had to go through a modem - it often turned out that the bandwidth of myself getting up and physically commuting to the remote system was considerably larger than that of the modem.

Well, nowadays the tools have been considerably upgraded. We're now coding in Eclipse or Studio, two very powerful IDEs with great facilities. Workstations have become very powerful. Dual monitor setups are becoming the norm, providing valuable additional screen real estate. Simulators are sometimes built into the payment products and, if they aren't, they're easy to build, free to download or inexpensive to buy. Test facilities and test harness suites are abundant, easy to work with and sometimes even free of charge. TCP/IP has dominated and become the network protocol of choice; the implementation of other network protocols is usually left to dedicated network devices. Remote connections can be established in a secure and fast manner.

The weird thing though is this: while the state of the facilities and tools available has improved by an order of magnitude, productivity hasn't improved as fast. Back in the 90s the rule of thumb for estimating the implementation of an extensive stream message protocol was six months or more. Now it's three months or more. Way faster than before - after all, being able to do a job in half the time is a big improvement. But it's not an order of magnitude improvement.

Implementation of small-sized projects has benefited greatly from software evolution and maturity of tools. I have repeatedly been able to complete in a week projects that could easily take months in the bad old days. But the speed improvements diminish rapidly with the expanding size of projects. We're still faster but not by the same degree. And there's a very simple explanation for it. Complexity has increased. A gazillion new factors and acronyms have been introduced in our daily development cycle since the 90s. BCP, CNP, EMV, PCI, ISO 20022, NFC, CAP are some of the things that we have to keep in our heads.

This is by no means unique to the payments world. Software in general has become much more complicated. Attempts to simplify things and hide all this complexity behind frameworks, SDKs and APIs are partially successful, but so far frameworks are not always able to help us keep up with increased complexity. Perhaps the abstractions provided by the frameworks will become better over time. Perhaps frameworks, like software, are themselves becoming too complex and contribute to the problem in their own way. The one thing that's clear is that the human factor is becoming more important in our line of work. In the bad old days, a good programmer without payment systems experience could stand on his own two feet after a few months. This is not the case anymore. The individual has to capture and process a massive amount of information before beginning to see the light. Getting, and keeping, good people is now more important than ever.

2011/04/01

Authorization scripting

Modern payment systems usually offer a multitude of configuration options that cater for a large variety of situations. The thing, though, with such configuration parameters is that they are meant to cover the different situations thought up by the system designers. The extent of configuration parameters is limited by the architectural imagination of the system designers. When users of payment systems venture outside these limits (something that tends to happen quite often), there comes a moment when configuration just doesn't cut it.

When configuration can't do the job, the traditional way of implementing changes in a payment system is through adding code. This can take many forms depending on the system architecture. An external message protocol may be coded as a plug-in. A component may allow user exits to be written. If code for core components is available, end users may hack away at it and change the system behavior to implement their requirements (a practice that they universally regret in the years to come).

On the business side of the spectrum, end users don't like code changes for several reasons.
  • They usually take a disproportionate amount of time to implement and thoroughly test.
  • They carry a financial cost.
  • Custom code tends to cater for uncommon situations and is generally not well covered by patches and upgrades.
  • When piled up, custom code may erode the system into a kind of spaghetti state.
A not-so-new development that claims to minimize the need for custom code is scripting. What a scripting facility in a payment system offers is an abstraction of a lot of the intricacies of application languages, making it possible for non-programmers to quickly implement a script. In addition, scripts are generally interpreted in nature and thus more dynamic.
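
As a bare-bones illustration of the concept, here is a Python sketch of an authorization step that runs dynamically loaded scripts against each transaction. The names are invented, and a real facility would sandbox the scripts and expose a proper transaction API rather than a raw exec.

```python
# Bare-bones sketch of an authorization scripting hook; illustrative only.

SCRIPT_SOURCE = """
# Decline cash withdrawals above 500.00 in this example - the kind of
# two-line customization that would otherwise need a compiled plug-in.
if txn["processing_code"].startswith("01") and txn["amount"] > 500_00:
    verdict = "decline"
"""

def run_auth_scripts(txn, script_sources):
    """Run each script with the transaction in scope; first decline wins."""
    for source in script_sources:
        scope = {"txn": txn, "verdict": "approve"}
        exec(source, scope)            # scripts can be swapped at runtime
        if scope["verdict"] == "decline":
            return "decline"
    return "approve"

txn = {"processing_code": "010000", "amount": 600_00}  # amount in cents
print(run_auth_scripts(txn, [SCRIPT_SOURCE]))          # decline
```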

To the user, scripting seems like a panacea but is it? It's true that a scripting facility offers some advantages.
  • The interpreted nature of scripts makes it easy to dynamically inject them in the authorization process without downtime.
  • Some customizations are very easy to implement with a script.
  • Implementation of a script generally requires less time and resources than a code change.
  • Scripts can be much more easily implemented by end users than code changes can.
However, not everything is as it seems.
  • By default, execution of scripts is an order of magnitude slower than execution of code.
  • Just as with configuration, the scripting model places restrictions on the customizations that are possible with a scripting language.
  • Making it easier to implement changes doesn't necessarily translate to a more manageable situation. Instead of code spaghetti you could end up with script spaghetti of the same complexity, especially with script-happy users.
  • The burden of script maintenance and proper alignment with system patches and upgrades is generally on the end user.
A proper scripting facility is obviously a nice addition to a payment system. It's better to have it than not to have it. But it should also be appropriately rated by end users. No scripting facility, regardless of how useful it may be, can make up for shortcomings of a system in other critical areas. And, regardless of the versatility of the scripting facility, there will always come a time when implementation of custom code is inevitable. In that respect, a scripting facility is useful, but a software development kit is what's really invaluable.

2011/03/25

Communication

I've been kind of sick lately. Not a cold or anything significant, but one of my teeth got an infection. The problem was that it's under a dental bridge and there was no relief. My face got swollen and it hurt like hell. I was instantly relieved when it burst, but I had to have minor surgery to save the bridge. A lot of stitches. Swollen and hurting like hell again, plus a minor fever.

What's the point of this? During this period I took several days off and worked from home. It was (and still is) a pretty busy period and deadlines wait for no one. Interesting re-discovered fact: working at home can be an order of magnitude more productive than working at the office. There are no distractions. Simply being able to focus, get into the zone and stay there for long periods of time works wonders.

That is, until the time comes when I need to communicate with the outside world. There are tasks which can be accomplished by putting yourself inside a virtual bubble and simply ignoring external I/O until you recognize that a significant amount of time has passed because your stomach is growling for lunch/dinner. But most tasks just demand some level of interaction with co-workers in order to progress.

And this is where working from home gets a little more difficult. This may sound strange in the era of e-mail, web cameras and teleconferencing so I guess I'm a bit old school. But I notice that some of my crucial interactions with people are not accomplished easily without face-to-face communication. Perhaps I'm doing something wrong or I haven't found the right way to do the job. But there's an additional possibility. Perhaps the bandwidth of unspoken conversations that take place using body stance and body language is significant in a face-to-face encounter.

2011/03/19

Ownership

One of the most important decisions a bank needs to make about their EFT switch system is probably whether to find a hosting service or acquire a switch and host it themselves in-house. There are a lot of factors that come into play when considering this decision, which is by no means an easy one.

Generally speaking, the decision to use a hosting service always translates to relinquishing some control to the hosting provider. Depending upon the size and focus of the bank, this is not necessarily a bad option. The hosting provider can protect the bank from a lot of the complexities involved in running a switch.

Most of the time though, large banks opt for an in-house switch that they control and operate themselves because they perceive (and rightly so) that:
  1. They can have more flexibility in terms of the functionality they offer to their cardholders.
  2. They can respond to market drivers faster and deliver products quicker.
  3. They can differentiate themselves from other banks (which is really a manifestation of the first two points).
The choice to in-source the EFT switch does have a price. The bank has to take ownership of the system. This translates differently for different people, but the core point is this: the switch becomes one of the bank's most important internal systems. All the processes that take place before the system goes live and during its lifetime must be internally owned and driven by the bank. Certification with global and regional networks? Implementation of new functionality and products? Testing of patches and upgrades? Regression testing? Unit testing? Integration testing? Trend monitoring and alerting? Pro-active monitoring of fraud indicators?

All the above become the bank's responsibility. Departments need to be staffed with resources that have considerable EFT experience in order to take on tasks such as these for the duration of the system's lifetime. Where does the vendor come into play in this scheme of things? It's doubtless very important to choose a vendor with solid support skills, a pro-active approach to business drivers and a general outlook towards the EFT landscape. Ideally the switch vendor will act more as a business partner and consultant than a simple provider of software licenses. A vendor may also provide resources to perform certain technical tasks on behalf of the bank.

But the one thing that a vendor cannot do is own the process of setting up and running an in-house switch on behalf of the bank. One might say that this needn't be so: just get resources from the vendor to perform the tasks the bank ought to do and be done with it. Well, that could work...but then again, that is what hosting providers do. And they do it more efficiently and a lot cheaper, I would dare say. If the intent is to be more flexible and innovative, the bank should have their own people who live and breathe the product they've chosen. Getting immersed in the switch system is the one thing that cannot be outsourced if the switch system is in-sourced.

2011/03/13

I/O matters

There are no large banks or processors in the part of the world where I live. Most banks see a few million transactions per month processed through their switch and credit card systems. A couple of banks and a large processor may reach some tens of millions of transactions per month.

Naturally, a simple figure like "X transactions per month" does not tell the whole story. Other considerations are also important. External factors and timing usually combine to drive transaction-per-second numbers to sometimes unexpected heights. Christmas is a very good example; all banks and processors see a sharp increase in EFT traffic during that time of the year.

So we get terms like "15-minute average TPS rate" and "peak TPS rate". These terms indicate that an EFT system can get busy in a number of very different ways, and the differences matter. 20 million transactions per month translate to an average of 7.7 TPS. That may not sound like much. But what if half a million of those transactions occur on December 24, between 11:00 and 13:00? For those two hours, that translates to an average load of almost 70 TPS, to say nothing of the minute-peak TPS numbers. Needless to say, you need to plan for at least that 70 TPS situation, with lots of capacity to spare for the peaks.
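
The arithmetic, for the skeptical, using Python as a calculator:

```python
# Checking the numbers above.
monthly = 20_000_000
avg_tps = monthly / (30 * 24 * 3600)           # 30-day month
print(f"average: {avg_tps:.1f} TPS")           # average: 7.7 TPS

burst = 500_000                                # Dec 24, 11:00-13:00
burst_tps = burst / (2 * 3600)
print(f"two-hour burst: {burst_tps:.1f} TPS")  # two-hour burst: 69.4 TPS
```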

What's the most important differentiating factor that determines the mileage of an EFT system? That's I/O capacity. This usually comes as a surprise to the uninitiated but it's only a logical conclusion. An EFT system does a lot of its magic through the database. The database is stored on the disk. The disk is slow; in fact it's the slowest subsystem inside modern computers and that's why there's so much investment in caching strategies.

Whenever I'm asked about system upgrade paths, I always point out that simple fact and ask customers to examine their options. Have a single server hosting the EFT system? Split the transaction log and database files onto two different disks. Is that not enough? Have a look at those solid state appliances for single-server use. Too isolated and compartmentalized? Get one or more fiber channel cards, move the database to the enterprise storage and allocate a LUN with premium priority to it. These upgrade steps work wonders for TPS throughput rates. The CPU is usually the last component eligible for upgrades, unless you're consistently seeing more than 50% usage sustained for periods exceeding several minutes.

2011/03/09

"Standard" ISO8583

After many years of dealing with banks, processors and people related to payments I still get to hear it from time to time. It goes like this:

My question: "What's the application protocol used by the X entity?"
The response: "ISO8583".

As if that says it all! It's true that the knowledge that an application protocol is based on ISO8583 does convey some information. I immediately know that there's going to be a jumble of fields whose presence is defined by a bitmap of some sort. I know that I'll have to deal with things like processing code, STAN, RRN, POS code and other related fields that I may have dealt with in the past. I also know that I have a good parsing facility at my disposal and I can use it to implement the protocol.

But that's about it. Some people say that application protocols based on ISO8583 are dialects of the same protocol. Although this sometimes appears to be a legitimate analogy, I would say that ISO8583 merely defines the consonants and vowels of a language. People that speak the same language in different dialects can communicate after a fashion. They'll find some common ground, a verbal subset with common meaning to them both.

Computers speaking different flavors of ISO8583 cannot communicate at all. It only takes one tiny bit of difference to throw everything out the window. ASCII or EBCDIC? How is the bitmap packed? Is that LLVar or LLLVar? Is this field mandatory, optional or conditional and what's the condition defining its presence or absence? What's that private data field, I don't know how to decode it!
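
To make the point concrete, here is the very first decoding step in a Python sketch, assuming one flavor among many: an ASCII-hex primary bitmap. Get any one of the conventions above wrong and every field after it lands in the wrong place.

```python
# Decode an ASCII-hex primary bitmap into the set of present fields.
# This assumes one particular convention; a binary bitmap, or EBCDIC
# text, would need entirely different handling.
def fields_from_hex_bitmap(bitmap_hex):
    bits = int(bitmap_hex, 16)
    # Field 1 is the most significant bit of the 64-bit map.
    return [i for i in range(1, 65) if bits & (1 << (64 - i))]

print(fields_from_hex_bitmap("7230001000000000"))
# [2, 3, 4, 7, 11, 12, 28]
```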

Questions like these have to be, not only answered, but also asked. I've seen application protocols based on ISO8583 that are crafted for POS transactions only. Or ATM transactions only. There are ISO8583-based protocols that cater for a multitude of transactions and there are others that handle only balance inquiry and cash advance. Given the structure, capabilities and degree of configuration of your ISO8583 parser, some ISO8583-based application protocols may be implemented in a couple of weeks. Others may need several months. Being based on ISO8583 doesn't tell the whole tale.

2011/03/05

The sun is beginning to set

I never understood the way ACI handled the Base24 sunset. When the policy was first announced I was amazed. The need to move forward is understandable. But why endanger your leading market position in a way that creates opportunities for your competitors and displeases your customers at the same time?

Well, maybe there were other considerations as well. Perhaps ACI needed to focus their resources on Base24-eps and could not effectively do that while supporting two major and very different products at the same time. Maybe they did their homework and realized that Base24 classic, while undeniably a great cash cow and the product with the largest market share, is a dying beast which is slowly being displaced by more modern systems. Perhaps they projected into the future and, after seeing that they were losing their leading position one installation at a time, discovered that a radical change in policy was both inevitable and necessary.

Whatever the reasons, it's now official: the sunset of Base24 classic has forced customers to get out there and look for alternatives. They may not be out there en masse, and they may have totally different outlooks depending on corporate culture, but it's definitely happening.

I've recently gone through the exercise of responding to an RFP for Base24 classic replacement. Three things are very clear.

  1. The customer is aggravated. The feeling is not exactly oozing out of the pages of the RFP, but it's clear that they didn't like their hand being forced.
  2. The customer is aware that a lengthy migration is inevitable, they have no delusions about that.
  3. The existing Base24 classic installation was modified to incredible lengths, which is understandable if one realizes that the system has been in place for well over a decade.
I don't know what the result of the RFP process will be. Perhaps the customer will opt to continue with ACI and migrate to Base24-eps, or choose another system from a different vendor and start fresh. One thing that I doubt is their staying with Base24 classic. I'm betting that the CFO will be part of the evaluation process and have his say, evaluating license and services costs and breaking them down into a five-year ROI sheet to see if the change is warranted from a financial standpoint. Monetary concerns are very important, especially in the current environment. But the writing on the wall is clear: Base24 classic is not here to stay. It's not dead but it may start to smell funny very soon.