CMS Announces Anti-Fraud Algorithms Will Begin Auditing Claims on July 1, 2011 Just As Insurance Companies Have Done For Years
Posted Jun 17 2011 10:06pm
This is actually a good thing that should have been done years ago, and on July 1st and thereafter we will see how it works. Claims are already pretty much scrubbed electronically by clearinghouses, so this is one more transaction process to audit and look for fraudulent patterns. It will be interesting to see the initial rollout, as I am guessing that, much like any other auditing software that comes into play like this, the SQL statements that run the queries looking for patterns will need adjusting, so the initial “algos” might start out real “tight” or real “loose.”
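To illustrate what I mean by “tight” versus “loose,” here's a toy sketch of my own (hypothetical data and schema, nothing to do with the actual CMS system): the same pattern query flags different providers depending on where you set the threshold.

```python
import sqlite3

# Toy claims table -- hypothetical data, not the actual CMS schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (provider_id TEXT, amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?)", [
    ("A", 120.0), ("A", 115.0), ("A", 4800.0),  # one outlier billing
    ("B", 130.0), ("B", 125.0),
])

def flag_providers(threshold):
    """Flag providers whose average billed amount exceeds a threshold.
    A 'loose' run uses a high threshold; a 'tight' run lowers it."""
    rows = conn.execute(
        "SELECT provider_id, AVG(amount) FROM claims "
        "GROUP BY provider_id HAVING AVG(amount) > ?", (threshold,))
    return [r[0] for r in rows]

print(flag_providers(2000.0))  # loose run: nothing flagged
print(flag_providers(500.0))   # tight run: provider A flagged
```

Same data, same query; the only tuning knob is the threshold, which is exactly the kind of adjustment I expect to happen in the first months.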
I am guessing the second option, starting loose, is the likely starting point, but don’t quote me on that; it’s the way I would go, tightening down later after a large number of claims have passed through the system. In addition, I am guessing that with today’s technologies some “machine learning” will be incorporated here too, so those patterns are remembered and adjusted as needed when reporting takes place to watch for efficiencies. This, along with the “Do Not Pay” database, should help out quite a bit.
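In the simplest form, “learning” just means the cutoff recalibrates itself as claim history accumulates instead of being hard-coded. Here's a minimal sketch of that idea, assuming a plain outlier test on billed amounts (the real system is surely far more elaborate):

```python
import statistics

# Hypothetical claim-amount history for one procedure code.
history = [110.0, 95.0, 120.0, 105.0, 130.0, 98.0]

def is_suspicious(amount, history, k=3.0):
    """Flag a claim more than k standard deviations above the historical
    mean. As more claims join the history, the cutoff adjusts itself."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return amount > mean + k * stdev

print(is_suspicious(115.0, history))   # in-pattern claim passes
print(is_suspicious(5000.0, history))  # extreme outlier is flagged
```

Lowering `k` is the “tighten down later” move: the same history, a stricter cutoff.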
It’s all about those algorithms, and that’s why that word holds center stage at this blog; those who have been reading here for a while are already in tune with what all this means.
Northrop Grumman was the outsourced partner who developed the software auditing system. Just don’t ask the state of Virginia about them:) In addition, they were also the contractor who created the “meaningful use” database. With aerospace being slow, contractors are looking for and getting contracts like this to keep more of their own people employed, and this helps.
In addition, federal investigators will have access to this information to analyze and review; reporting like this will help them do their jobs a bit better, as when equipped with a large list of “potential fraudulent claims” they know where to go looking.
Hopefully this will also cut down on the fraudulent claims billed under “dead doctors,” along with using the Social Security Death Index. That is one thing the government does well, as the Index is used by everyone as the #1 source for finding out if we are alive or not.
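A “dead doctor” screen is about the easiest pre-payment check there is: before a claim goes anywhere near adjudication, look the billing provider up in the death index. A sketch under my own assumptions (the hard-coded set stands in for the Social Security Death Index, and the NPI numbers are made up):

```python
# Hypothetical stand-in for the Social Security Death Index; a real
# screen would query a maintained database, not a hard-coded set.
deceased_npis = {"1234567890"}

def screen_claim(provider_npi, deceased_npis):
    """Reject a claim up front if the billing provider appears in the
    death index; otherwise let it continue to normal adjudication."""
    if provider_npi in deceased_npis:
        return "deny: provider listed in death index"
    return "pass to normal adjudication"

print(screen_claim("1234567890", deceased_npis))
print(screen_claim("9999999999", deceased_npis))
```

This is the “prevent before payment” model in miniature, versus the old “pay & chase” approach of recovering the money afterward.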
Now, being the geek that I am, I wonder how this is going to work; in other words, whose servers is this going to run on? Will the software be licensed to Medicare contractors (insurance companies) to run, or will it be a web-based cloud operation they connect with? I am thinking the latter, though I can’t say for sure; it’s the way labs and other auditing software connect today. So hopefully it is kept in house at CMS, does the work, and reports back to the Medicare contractors, who have little incentive in this area, since more than one subsidiary of their companies is involved and finding fraud sometimes cuts revenue to another of their divisions.
This is an issue that is just now being recognized with all the mergers and acquisitions that have taken place and the potential conflicts of interest they create. CMS really did need its own system, as relying on contractors was not going to do much if catching fraud ended up costing revenue for another subsidiary involved in processing claims, and that happens. A couple of months ago Harvard said they were going to do a study on the effect of mergers and acquisitions in healthcare, and I said “what took so long,” as I have been doing “subsidiary watch” posts here for about 3 years now. Fraudulent claims pay transaction fees just like good claims, so again there’s no incentive to cut them down and catch the thieves, unless it’s their own commercial claims, and then they are all over it.
For a good example of “false positives,” you can read this post from 2010 about what is now a big lawsuit from several dermatologists in San Diego who were shut down via Ingenix (United Healthcare) algorithms. Within 5 days, without warning, all insurers quit paying them, and it became a huge mess. I don’t know the outcome of the case, but the link below will tell you how the algorithms affected all their businesses and even closed a few practices, as the offices had already rendered the services, and it pitted patients against doctors on the money side. This is a story that should not have happened; it shows the bad side of a lack of communication and could have been handled differently.
Here’s one more example, where a contracted third party used by Blue Cross was caught with algorithms that denied care. So again, it’s a big deal to have the algorithm work properly and find fraud patterns, not to use it as 100% grounds for denying claims.
Since this is a CMS government program, though, the good news is that you do have a say and can appeal, unlike the processes of commercial insurers, where you have to fight tooth and nail to even get a review of a denied claim. It has been in the press many times how these algorithms can lead to false positives, which is why I think a loose rule to begin with would be best, adjusting the algorithms as more history is built up as claims pass through. All these processes give you a “score,” either on your information or on the claim content.
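The “score” idea can be pictured as a handful of weighted rules: each rule that matches adds points, and a high total routes the claim to human review rather than automatic denial. A sketch with made-up rules, weights, and threshold of my own choosing:

```python
# Hypothetical scoring rules -- names, weights, and the review
# threshold are all my own invention for illustration.
RULES = [
    ("billed after hours",     lambda c: c["hour"] < 6 or c["hour"] > 20, 10),
    ("unusually high amount",  lambda c: c["amount"] > 1000, 25),
    ("high daily claim count", lambda c: c["claims_today"] > 40, 30),
]

def score_claim(claim, review_threshold=40):
    """Sum the weights of matching rules; a high score flags the claim
    for review -- it is never 100% grounds for automatic denial."""
    score = sum(weight for _, rule, weight in RULES if rule(claim))
    action = "flag for review" if score >= review_threshold else "pay"
    return score, action

claim = {"hour": 23, "amount": 1500.0, "claims_today": 12}
print(score_claim(claim))  # (35, 'pay') -- two rules hit, under threshold
```

Note the design point: even a claim that trips a couple of rules still gets paid; only a pile-up of matches earns a human look, which is the behavior the dermatologist story above argues for.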
If we could only certify payment algorithmic processes like we do medical records, we would be miles ahead in knowing what to expect. But again, with algorithms, as well as algorithmic business models, that can be changed and rolled out in 24-48 hours, we are all still at the mercy of whatever mathematics the insurers use that day. Again, I am hoping the appeal process will NOT model what the insurers are doing, since this is a CMS program. We can learn a lot from private industry, but we should stop short and change or adjust those formulas that don’t work on the government side. Would it not be a grand day for all to certify insurer payer algorithms so we know what to expect, with them digitally filed for us to reference? You can bet it’s probably not going to happen, though, sadly. I have written about that topic before, and we still have “Forrest Gump” claim processing with insurers: we just never know what we are going to get.
So when the new rule goes into effect and claims are audited for potential fraud, let’s hope it rolls in smoothly and doesn’t create a bottleneck with algorithms set too “tight” from the onset. I will end this post with an article I wrote way back in 2009 on something still needed today, a Department of Algorithms or something along that line, so we can finally have some real transparency. I was probably once again too far ahead of my time, being the former algorithm writer that I am. BD
On the heels of the White House launch of the Campaign to Cut Waste - an administration-wide initiative to crack down on waste, fraud and abuse, the Centers for Medicare & Medicaid Services (CMS) announced today that starting July 1, it will begin using innovative predictive modeling technology to fight Medicare fraud. Similar to technology used by credit card companies, predictive modeling helps identify potentially fraudulent Medicare claims on a nationwide basis, and helps stop fraudulent claims before they are paid. This initiative builds on the new anti-fraud tools and resources provided by the Affordable Care Act that are helping move CMS beyond its former “pay & chase” recovery operations to an approach that focuses on preventing fraud and abuse before payment is made.
“President Obama is committed to hunting down and eliminating waste, fraud and abuse throughout the federal government,” said HHS Secretary Kathleen Sebelius. “Our work to fight Medicare fraud is an important part of the Obama Administration’s effort to root out wasteful spending and change the way government does business.”
“Today’s announcement is bad news for criminals looking to take advantage of our seniors and defraud Medicare,” said CMS Administrator Donald Berwick, M.D. “This new technology will help us better identify and prevent fraud and abuse before it happens and helps to ensure the solvency of the Medicare Trust Fund.”