In general, what I've observed is that when memory_affinity is enabled, 64k pages tend to make unexpected paging space write activity more prominent. That may be due to some combination of issues such as those listed below.
Sometimes folks will notice high CPU utilization by lrud (the least-recently-used page-stealing daemon) or by psmd (the page size management daemon) that correlates with poor performance. I contend that it's not usually the CPU utilization or CPU shortage that degrades performance; rather, the degradation is more likely due to paging space traffic and/or memory free frame waits, which are closely correlated with lrud and psmd activity. Of course, in some cases tasks in the runqueue for the core executing lrud or psmd really do experience too much dispatch wait time because that CPU is busy. But I don't think that's normally the case.
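One way to check which situation you're in is to watch the paging counters rather than the daemon CPU time. A quick sketch using standard AIX vmstat output (the counter names below match the AIX vmstat -s report, but verify the exact labels on your level):

```shell
# Cumulative VMM counters since boot -- compare two samples taken a few
# minutes apart and look for "free frame waits" and "paging space page outs"
# growing during the slow period:
vmstat -s | egrep -i "free frame|paging space"

# Interval view: pi/po columns are paging space page-ins/outs per second,
# fre is the free frame list size:
vmstat 5 6
```

If the free frame wait and paging space counters barely move while lrud/psmd burn CPU, the dispatch-wait explanation becomes more plausible; if they climb steadily, the paging traffic itself is the likelier culprit.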
I've done lots of work on a database engine with very high concurrent activity, where we chose to routinely disable 64k pages. In doing so, we avoided the potential paging space utilization and free frame waits associated with AIX defects such as those described in the APARs below.
Disabling 64k pages comes at a cost: TLB misses may increase, and the added overhead of handling all memory as 4k pages may itself degrade performance. But it's something to consider in a jam. If necessary, you can monitor the change in TLB misses under a critical workload.
Here are some AIX 6 defects that can cause performance impact when 64k pages are enabled:
IZ42626: PSMD OPERATIONS ARE VERY SLOW (applies to AIX 6100-02)
IZ92561: 64K KERNEL HEAP CAUSES PAGING WHEN RAM IS OTHERWISE AVAILABLE (applies to AIX 6100-03)
IZ72031: 64K PAGING TAKING PLACE WHEN AVAILABLE SYSTEM RAM EXISTS (applies to AIX 6100-03)
IV13560: HIGH PSMD THREADS CPU CONSUMPTION (applies to AIX 6100-06)