I got a question about PowerExchange throughput the other day. When we were first evaluating PowerExchange as a solution, this was a primary concern of ours. We devoted hundreds of hours to designing, implementing, and refining a good throughput test. In the end, we over-tested: the throughput we tested against was so much higher than what our production systems handle, even during peak load, that the actual system handles its load with great ease. Of course, sometimes that's the point of load testing: to determine theoretical limits, not actual ones.

Our physical environment
We have a source (PeopleSoft) database running on Oracle 9i/Solaris, a target (ODS) database running on Oracle 9i/Solaris, and PowerExchange running on the same Windows 2003 box as PowerCenter. In mid-July, we'll be upgrading both source and target databases to 10g.

Our data environment
Our current change data capture mapping captures 10 source tables and writes to 30 targets, 3 for each source table. 6 of these tables are considered "wide" tables; they are also the tables with the most DML activity in our PeopleSoft system. One has 23 columns totaling 2.2 kb/row. Another has 148 columns totaling 1.9 kb/row. You get the picture: lots of data per row. We are capturing every byte in our ODS, plus some extra LogMiner columns like cap_tm.

We don’t have a high, sustained throughput in our system. Our high throughput periods are when batch cycles complete. This is usually nights/weekends when source/ODS latency is not as much of a concern.

Our PowerExchange settings
After discussing our environment with Informatica, we’ve come up with a couple tuning parameters for our system. These are tuned for lower latency, not higher throughput.

  1. All of our sources/targets in the same mapping.
    The fewer open sessions in LogMiner, the faster your reads are going to be. This one was non-negotiable since our primary directive (and reason for using PowerExchange instead of using Oracle triggers) was not to impact the source database.
  2. UOW COUNT
    This parameter in the source connection is set to 1: when PowerExchange reads one changed row from the source, it sends it directly to PowerCenter. While I don't have any specific measurements, I seem to remember that batch commits were processed with dramatically higher throughput when we had this set higher.
  3. Real-Time Flush Latency
    This is actually set to 10000 ms, which I seem to remember was the default. No significance here, other than that when we did load testing, we used the flush latency and UOW Count together to balance high throughput against tolerable latency. If you have a generous service level agreement, like "capture source data within five minutes of the change," you can raise your real-time flush latency and your UOW Count and have PowerCenter process mini-batches of rows, which would really improve throughput (see the sketch after this list).
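
To make that trade-off concrete, here is a minimal Python sketch. This is not PowerExchange code; the arrival rate, UOW Count, and flush latency values are assumptions for illustration only. The model just assumes a flush happens when either the UOW Count fills or the flush latency timer expires, whichever comes first.

    # Rough model of the UOW Count / Real-Time Flush Latency trade-off.
    # Not PowerExchange internals; just an estimate for a steady arrival rate.
    def batching_model(rows_per_sec, uow_count, flush_latency_ms):
        """Return (flushes per second, worst-case latency in ms added by batching)."""
        ms_to_fill_batch = (uow_count / rows_per_sec) * 1000
        added_latency_ms = min(ms_to_fill_batch, flush_latency_ms)
        return 1000 / added_latency_ms, added_latency_ms

    # Our setting: UOW Count = 1 flushes on every row (lowest latency, most flushes).
    print(batching_model(rows_per_sec=1000, uow_count=1, flush_latency_ms=10000))
    # A mini-batch setting: far fewer flushes, but a row may wait up to a second.
    print(batching_model(rows_per_sec=1000, uow_count=1000, flush_latency_ms=10000))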

So, how much throughput?
This varies by table, of course, because the wider tables have lower throughput than the narrower ones. We have bursts of throughput of up to 4700 rows per second, but that number doesn't really tell you much: 4700 rows of a 23-byte table isn't much data. We're not sure whether that number actually represents updates to our 23-byte table. Actually, I should look into that.

The clearer measure of throughput is data volume (bytes/sec), not rows/sec.

The most frequent batch job we see happens to update our widest source table. We regularly see throughputs of up to 1200 rows per second in the mornings when this job commits. That's about 2.6 MB/s. Not bad. Our "standard" throughput after batch windows is between 500 and 1000 rows per second, which is still 1-2 MB/s.
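
For reference, the rows/sec to MB/s conversion is just row width times row rate. A quick Python check using the ~2.2 kb/row figure for the widest table above:

    # Convert rows/sec to MB/s using the widest table's ~2.2 KB/row figure.
    row_kb = 2.2
    for rows_per_sec in (1200, 1000, 500):
        mb_per_sec = rows_per_sec * row_kb / 1024
        print(f"{rows_per_sec} rows/s x {row_kb} KB/row = {mb_per_sec:.1f} MB/s")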

In summary, the thing we were most concerned about with PowerExchange, throughput, turned out to work very well. We were able to get by with tuning very few variables (just the source connection in the Workflow Manager), without having to look into source/target tuning.

I invite your comments via email (scolin@pobox.com) or here.


4 Responses to “PowerExchange Throughput”

  1. Trevor Tian says:

    Hi,

    I have some questions about your source instance: what is the size of your redo log files, and how many redo groups do you have?

    Regarding throughput, I'd like to know how you defined the duration, i.e. the start time and end time. For instance, does 500 rows per second mean you changed 500 rows in the source instance and committed, and the data was then updated in the target after 1 second?

    Thanks,
    Trevor

  2. Shane says:

    Trevor,

    Our source instance is about 250GB. There are four 100MB redo logs. During these batch cycles, we're doing log switches (is "log switch" the correct term?) every 30 seconds. When we were testing throughput, we discussed upping the size of the redo logs to get faster reading. This was suggested by the DBA, not by us on the PowerExchange side. We never really tested this to see if it had any effect. Have you done any testing with this?

    These numbers are from our production instance, so we don't have the sort of control over source transactions that we would have in a test instance. I measure throughput by looking at the timestamp of the target transaction: SYSDATE. The pattern tells you when there is a batch commit. You'll see a sudden spike of 600 rows one second, 550 the next, 550 again … and then it will drop back to single digits. So, yes, we are seeing latency, but it's certainly tolerable for us.
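
    For what it's worth, here is a minimal sketch of that kind of binning in Python. The timestamps are made up; in practice they would come from the target table's load-timestamp column (the one populated with SYSDATE):

        from collections import Counter
        from datetime import datetime, timedelta

        # Made-up arrival timestamps standing in for the target's SYSDATE column:
        # a three-second burst from a batch commit, then a slow trickle.
        start = datetime(2007, 6, 1, 6, 30, 0)
        arrivals = [start + timedelta(milliseconds=2 * i) for i in range(1500)]
        arrivals += [start + timedelta(seconds=10 + i) for i in range(5)]

        # Bin arrivals by whole second to see rows/sec over time.
        per_second = Counter(ts.replace(microsecond=0) for ts in arrivals)
        for second, rows in sorted(per_second.items()):
            print(second, rows, "rows")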

    Remember that PowerExchange is replicating data asynchronously. If sub-second latency is your goal, you might want to check out another product. We’re very happy to capture source data without impacting performance in our front-end applications.

    It’s difficult to test latency in this case because the timestamps of these source transactions (dtl_capxtimestamp) are not the timestamp of the commit but of the DML action in the batch. So we’ll see a spike with source timestamps ranging over the past several hours.

    At first glance, this told us we had a latency problem. Further clues (talking to application developers and seeing other transactions on the same table come through during that period) told us that this was simply the behavior of batch commits: LogMiner "releases" the data on commit, not when the action occurs.
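
    A tiny sketch of why those capture timestamps mislead (the numbers are invented): rows whose DML happened hours apart all arrive seconds after the batch commit, so "arrival time minus dtl_capxtimestamp" looks like hours of latency even though the lag after the commit is small:

        from datetime import datetime, timedelta

        # Invented example: a batch job does DML over three hours, commits at 06:30,
        # and the captured rows land in the target a few seconds later.
        commit_time = datetime(2007, 6, 1, 6, 30, 0)
        dml_times = [commit_time - timedelta(hours=h) for h in (3, 2, 1, 0)]

        for dml in dml_times:
            arrival = commit_time + timedelta(seconds=5)
            apparent = arrival - dml           # looks like hours of "latency"
            real_lag = arrival - commit_time   # actual lag after the commit
            print(f"apparent latency {apparent}, lag after commit {real_lag}")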

    I’m interested in hearing how you set up your tests. We spent a LOT of time figuring out how to set up good tests, and didn’t have many peers to talk to. I’m glad to hear from you.

  3. rohit says:

    Hi,
    How would having all sources in the same mapping help? Won't all your sources open multiple LogMiner Oracle sessions even though they are all configured in the same Informatica session? Is my understanding correct?

    Thanks.

  4. Shane says:

    My memory is a little fuzzy at this point, because it's been well over three years since I worked with Informatica. If I recall correctly, it was only opening one session in Oracle and issuing one query per source, round-robining between the different Informatica sources. I don't even recall what the Oracle syntax was, but it was basically "SELECT * FROM WHERE table_name = 'table1'", and so on, one for each table.
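
    Purely as an illustration of that round-robin pattern, and not PowerExchange's actual code: V$LOGMNR_CONTENTS is the standard LogMiner view, but whether PowerExchange polled it per table exactly like this is my assumption. The shape of it would be something like:

        from itertools import cycle

        # Hypothetical sketch of one shared session cycling one query per table.
        # Table names are made up; the view name is an assumption on my part.
        tables = ["PS_TABLE_A", "PS_TABLE_B", "PS_TABLE_C"]
        queries = cycle(
            f"SELECT * FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = '{t}'" for t in tables
        )

        # A single reader would issue these in turn; here we just print two rounds.
        for _ in range(2 * len(tables)):
            print(next(queries))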

    I do recall a specific test where the throughput under this one-session setup was much, much faster.

    Sorry I can’t be more specific, but I don’t have anything available that would help me replicate a test to demonstrate.
