[Harvard Business Review] Why Employers Value Health Insurance

Why Do Employers Provide Health Care in the First Place?

In 2017, Americans spent $3.5 trillion on health care — a level nearly equal to the economic output of Germany, and twice as much as other wealthy countries spend per person, on average. Not only is this a problem for the people seeking care; it’s also a problem for the companies they work for. Currently, about half of Americans are insured through an employer, and in recent years companies have borne the financial brunt of rising costs. Frustrated, many employers have shifted the burden to workers, with average annual deductibles rising by more than 50% since 2013.

Illustration by Cami Dobrin

This isn’t sustainable for anyone. So it’s no wonder that firms like Amazon, Berkshire Hathaway, and JPMorgan Chase, as well as Walmart, have embarked on efforts to re-envision health care for their employees. Warren Buffett has even gone so far as to argue that health care costs hamper economic competitiveness more than taxes do.

How did the United States end up with such an expensive system? Unlike countries that have either government-provided health care or government-sponsored insurance, the U.S. system involves the interplay of employers, insurance companies, health care providers, consumers, and government. In order to understand the cost conundrum of America’s health care system today, you have to understand where the system began — and how increasing costs and technological advances have created new pressures and incentives over time.

Early 1900s: The First Health Insurance Plans Take Shape

In the 19th and early 20th centuries, medical care was largely ineffective. Many hospitals were charity institutions that functioned as shelters for people who could not be cared for at home, rather than places for people with acute injuries and illnesses to be treated. Physicians usually ministered to paying patients at home, since hospitals could be breeding grounds for infection. Because the care these doctors provided was basic, families did not face unexpectedly high health care costs and did not need the financial protection offered by health insurance; [the average annual per capita spending on health care was about $5 in 1900](https://hsus.cambridge.org/HSUSWeb/HSUSEntryServlet), the equivalent of $150 today.

In some European countries, health insurance developed earlier than in the U.S., but not because of the high cost of medical care. In 1883, German chancellor Otto von Bismarck enacted a health insurance system to stem socialist sentiment as he cemented German unification. The German system enabled workers to see a physician if they were sick, but, even more important, provided what we would today consider disability insurance: giving workers money if illness or injury prevented them from being on the job. In Britain, the 1911 National Insurance Act provided a sickness benefit and free medical treatment to British workers. Versions of these disability insurance programs, referred to as “sickness insurance,” also began forming in the U.S. around the same time, organized largely by trade unions and fraternal societies. While there was an early attempt during the Progressive Era to pass compulsory insurance at the state level, it never gained traction, and it died completely when anti-European sentiment rose during World War I.

In the first decades of the 20th century, medical treatment shifted out of the home; reforms in medical education led physicians to train and practice in hospitals, which housed state-of-the-art antiseptic surgical suites and new technologies such as X-rays. As more people sought treatment in hospitals, health care costs began to rise. By 1929, [average annual medical costs per person were $108](https://pdfs.semanticscholar.org/13f6/d170bf582ea2f3a63ebe5cbc56bff216d013.pdf), equal to about $1,550 today. A stay in the hospital became out of reach for middle-class families.

Existing fire and casualty insurance companies were reluctant to offer medical coverage, because they viewed health as uninsurable and feared that the people most likely to need medical care would be the only ones buying insurance. This phenomenon, known as adverse selection, plagued insurance markets in the 1920s and 1930s (and still does today). For insurance to be effective and affordable, both healthy people and people more likely to become ill must take part.
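To see why adverse selection makes a voluntary individual market unstable, consider a minimal sketch (every dollar figure and enrollment assumption below is hypothetical, not drawn from the article): if the insurer prices the premium at the pool's average cost, the healthiest enrollees find the premium a bad deal and drop out, the average cost of those who remain rises, and the premium must rise again.

```python
# Minimal sketch of an adverse-selection "premium spiral".
# All numbers are hypothetical illustrations, not historical data.

# Expected annual medical costs for ten prospective enrollees,
# sorted from healthiest to sickest.
expected_costs = [10, 20, 30, 40, 60, 80, 120, 180, 260, 400]

def premium_spiral(costs, rounds=5):
    """Re-price the premium at the pool's average cost each round;
    anyone whose expected cost is below the premium drops out."""
    pool = sorted(costs)
    for r in range(1, rounds + 1):
        if not pool:
            print(f"Round {r}: nobody left to insure -- the market unravels")
            break
        premium = sum(pool) / len(pool)           # break-even community rate
        print(f"Round {r}: {len(pool)} enrollees, premium = ${premium:,.0f}")
        pool = [c for c in pool if c >= premium]  # healthier enrollees opt out

premium_spiral(expected_costs)
```

In this toy pool the premium climbs from $120 to $400 within a few rounds, until only the sickest person is left paying roughly their own expected cost. Tying enrollment to employment short-circuits the spiral, because people join the pool by holding a job rather than by expecting to need care.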

And that’s why employers started playing an outsize role.

1930s: Enter Hospital Payment Plans

Employment-based insurance developed in the U.S. primarily because offering insurance to groups of workers mitigates adverse selection. Ironically, it was not insurance companies that figured this out. Rather, the problem was solved in 1929 when Justin Ford Kimball, an administrator at Baylor University Hospital, devised a means to alleviate the financial pressure the hospital faced from unpaid hospital bills. During his time as the superintendent of Dallas schools, Kimball had developed a sickness benefit program for teachers. In his new role at Baylor, he developed a simple plan based on insurance principles to help people pay their hospital bills, and recruited Dallas teachers to test his theory. Under Kimball’s plan, Baylor would provide each teacher with 21 days of hospital care for a prepaid annual fee of $6. By selling health insurance to a group of employed teachers who were healthy enough to work, the plan ensured that the risk pool would not be overwhelmed by people who were likely to be sick.
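The arithmetic behind a prepayment plan like Kimball's is simple risk pooling. Here is a back-of-the-envelope sketch: the $6 fee and 21-day cap come from the plan described above, while the enrollment, hospitalization rate, daily hospital cost, and average stay are purely illustrative assumptions.

```python
# Back-of-the-envelope risk pooling behind a 1929-style prepayment plan.
# The $6 annual fee and 21 covered days come from the article; enrollment,
# hospitalization rate, daily cost, and length of stay are assumptions.

enrollees = 1_000             # assumed number of teachers in the plan
annual_fee = 6.00             # prepaid fee per teacher per year (from the article)
hospitalization_rate = 0.05   # assume 5% of enrollees are hospitalized in a year
avg_stay_days = 7             # assumed average stay, well under the 21-day cap
cost_per_day = 5.00           # assumed hospital cost per day in 1929 dollars

revenue = enrollees * annual_fee
expected_claims = enrollees * hospitalization_rate * avg_stay_days * cost_per_day

print(f"Prepaid revenue:  ${revenue:,.0f}")
print(f"Expected claims:  ${expected_claims:,.0f}")
print(f"Buffer remaining: ${revenue - expected_claims:,.0f}")
```

Because the fee is collected from everyone healthy enough to hold a teaching job, the handful of costly hospital stays is spread across the whole group, and the hospital collects its revenue up front instead of chasing unpaid bills.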

Word of the Baylor plan’s success spread rapidly, and at a crucial time for hospitals. As the nation sank into the Great Depression, hospital occupancy rates plummeted to as low as 50%. Desperate for revenue, numerous hospitals began to form their own prepayment plans. Eventually, the American Hospital Association (AHA) developed a logo for these plans to use, and the “Blue Cross” plans were born.

In addition to stemming adverse selection, Blue Cross also helped control costs by limiting so-called moral hazard, which occurs when having insurance coverage causes people to increase their use of health care services. Blue Cross initially covered only hospital bills and paid hospitals a set rate for a finite number of covered days, preventing patients from overusing the system. Blue Shield, which separately provided coverage for physicians’ charges, turned to a different method: paying a fixed dollar amount of a bill, with patients paying the difference. (This practice, called “balance billing,” may sound familiar. While it is rare today for consumers who see in-network providers, it can result in high bills for consumers who venture out of network, or who go to a hospital that is in network but are treated by an out-of-network physician.)
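Balance billing is easiest to see as arithmetic. In the minimal sketch below, all dollar amounts are hypothetical: the plan pays a fixed allowance toward the provider's charge, and the patient owes whatever remains.

```python
# Hypothetical balance-billing arithmetic: the insurer pays a fixed
# allowance toward the provider's charge; the patient owes the rest.

def patient_balance(provider_charge: float, plan_allowance: float) -> float:
    """Amount billed to the patient after the plan pays its fixed allowance."""
    return max(provider_charge - plan_allowance, 0.0)

# Early Blue Shield-style example: a fixed dollar payment per service.
print(patient_balance(provider_charge=25.00, plan_allowance=15.00))      # 10.0

# Modern out-of-network example: an out-of-network physician at an
# in-network hospital charges far more than the plan's allowed amount.
print(patient_balance(provider_charge=2_500.00, plan_allowance=800.00))  # 1700.0
```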

Commercial insurance companies, which had initially been reluctant to offer health insurance, witnessed the success of the Blues in conquering adverse selection and moral hazard and soon began to compete with the Blue Cross plans by offering insurance to employee groups. By 1940, roughly 9% of Americans had insurance coverage for hospital expenses.

World War II: The Rise of Modern Health Care Benefits

In the 1940s, a series of events ensured the expansion of the health insurance market and its employment-based nature. The tremendous mobilization of troops and resources during World War II led to a huge decline in unemployment, which fell to a low of 1.2% by 1944. In 1943, President Franklin D. Roosevelt signed Executive Order 9328, which limited the ability of firms to raise wages to attract increasingly scarce labor. The offering of health insurance, however, was exempted from this ruling. As a result, firms began to offer health benefit packages to secure workers. Unions also negotiated for health insurance on behalf of workers — a right that was assured in 1948 and 1949 when courts ruled in favor of steelworkers in two similar cases regarding health care coverage, one of which was later affirmed by the U.S. Supreme Court. These rulings, during a time when union membership rates were at their highest, played a key role in expanding employer-provided health insurance and other benefits.

The tax treatment of employer-sponsored health insurance also fostered the rapid growth of coverage. Employers were permitted to deduct health insurance contributions from their taxes as a cost of doing business, just like wages. But unlike wages, employer contributions to employee health insurance premiums were (and still are) considered exempt from employees’ taxable income, a ruling codified in the 1954 Internal Revenue Code. The tax treatment of health insurance led more Americans to be covered, and the coverage became more generous. In 1952, just before these changes in the tax code occurred, 47% of households had group health insurance. By 1957, nearly 66% of households had employment-based coverage.
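The value of this exclusion is easy to illustrate. The tax rates below are illustrative assumptions (roughly a middle-income marginal income tax rate plus the employee share of payroll tax), not figures from the article.

```python
# Why the tax exclusion favors compensation paid as health benefits.
# The 22% income tax and 7.65% employee payroll tax rates are
# illustrative assumptions, not figures from the article.

employer_dollars = 6_000.00     # what the employer is willing to spend
income_tax_rate = 0.22          # assumed marginal federal income tax rate
payroll_tax_rate = 0.0765       # assumed employee-side payroll tax rate

# Option 1: paid out as wages, then taxed.
as_wages_after_tax = employer_dollars * (1 - income_tax_rate - payroll_tax_rate)

# Option 2: paid as an employer premium contribution, excluded from
# the employee's taxable income.
as_premium = employer_dollars

print(f"Take-home value as wages: ${as_wages_after_tax:,.2f}")  # $4,221.00
print(f"Value as premium payment: ${as_premium:,.2f}")          # $6,000.00
```

Under these assumed rates, every dollar steered into premiums rather than wages buys more coverage than the employee could purchase with after-tax pay, which helps explain why coverage became both more widespread and more generous.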

1946–1965: Health Care Costs Rise

In the years following World War II, when the economy was strong, hospitals began placing an emphasis on expansion. In 1946, the Hill-Burton Act was passed, pumping billions of dollars into the construction of new hospitals. These facilities featured improved laboratories, operating suites, and equipment. With the advent of medical miracles like penicillin during the war, hospitals and physicians were eager to provide care, and Americans were just as eager to consume it.

But even as health insurance became more generous and more expensive, consumers were still insulated from health care costs, due to the reimbursement systems developed by Blue Cross and Blue Shield.

1960–1990s: After the Introduction of Medicare, Costs Rise Again

Unfortunately, the health insurance system didn’t change in response to increased expenses; in fact, a task force set up in 1963 by the AHA and the Blue Cross Association affirmed the use of a “cost-plus” reimbursement system, where hospitals were reimbursed for the cost of treating patients. Hospitals thus had carte blanche to charge patients at will, passing the bill along to insurers and employers.

The passage of Medicare in 1965 added even more fuel to the fire. To ensure physician participation in the program, Medicare reimbursed physicians based on a calculation of the “customary, prevailing and reasonable” fees within any given geographic area. With the program underwriting whatever fees doctors charged, the rate of increase in fees doubled. The rise in provider reimbursement costs combined with more patients obtaining health insurance for the first time proved to be expensive. Within four years of its implementation, Medicare resulted in a 37% increase in real health expenditures, with about half of that rise coming from the entry of new hospitals into the market and the other half coming from expansion of services. Between 1970 and 1980, health care spending increased at an average annual rate of 12%, leading overall expenditures to more than triple.

In an attempt to stem medical cost inflation, Medicare switched from its cost-based reimbursement system to a system of fixed prospective payment in 1983. Under the new system, which most commercial insurance companies began following as well, Medicare reimbursed hospitals according to a predetermined fee schedule based on diagnosis. Under this system, a hospital’s revenue was a function of patient admissions, and incentives for volume-based care took priority.
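A stylized comparison makes the change in incentives concrete; the dollar figures and the small cost-plus markup below are hypothetical, not actual Medicare parameters. Under cost-based reimbursement, spending more raised revenue; under prospective payment, admitting more patients for a fixed fee per diagnosis is what raises revenue.

```python
# Hypothetical comparison of the two reimbursement regimes.
# All dollar figures and the markup are illustrative assumptions.

def cost_plus_revenue(costs_incurred: float, markup: float = 0.02) -> float:
    """Cost-based reimbursement: the payer covers incurred costs plus a
    small margin, so higher spending means higher revenue."""
    return costs_incurred * (1 + markup)

def prospective_revenue(admissions: int, rate_per_diagnosis: float) -> float:
    """Prospective payment: a predetermined fee per admission for a given
    diagnosis, regardless of what the hospital actually spends."""
    return admissions * rate_per_diagnosis

print(cost_plus_revenue(costs_incurred=9_000_000))                      # spend more, earn more
print(prospective_revenue(admissions=1_200, rate_per_diagnosis=7_500))  # admit more, earn more
```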

Moreover, evidence suggests that as insurance expanded the market for health care, it generated incentives for increased development of technology. While some of this new technology represented a significant improvement over current treatments, other innovations, such as proton-beam therapy for prostate cancer, did not improve outcomes compared to existing procedures, while costing the system substantially more.

By 1990, Medicare payment reforms had only somewhat slowed the rate of growth in health care spending, with the average annual growth rate falling from 12.1% in the 1970s to 9.9% in the 1980s. At this point, 61.3% of Americans had private health insurance. Employers were starting to feel the pinch of rising health insurance costs, and they began to seek ways to ease them.

Their primary method was managed care. Numerous types of these arrangements flourished, ranging from true health maintenance organizations (HMOs), which integrated finance and delivery of care, to looser networks of preferred provider organizations, in which providers agreed to utilization review and discounted their fees. But without any meaningful changes in the U.S. health care system, costs for insurers and employers remained high. And the coming consolidation in the health care sector didn’t help matters.

2000 to Today: Consolidation and More Consolidation

Over the past 20 or so years, consolidation among both providers and insurers has reduced competition in health care. In 2016, 90% of metropolitan areas were considered highly concentrated for hospitals, with 65% concentrated for specialist physicians and 39% concentrated for primary care physicians. A recent American Medical Association report finds that 69% of markets also have highly concentrated health insurance markets.

Less competition in markets causes prices to rise. One recent study shows that prices at monopoly hospitals are 12% higher than in markets with more competitors. The pharmaceutical market has numerous regulatory barriers to competition as well, and offers very little pricing transparency to either physicians or patients. Instead of negotiating directly with drug companies, insurance plans rely on pharmaceutical benefit managers (PBMs) to act as their intermediaries. PBMs negotiate drug prices and rebates with manufacturers on behalf of the insurance plans and create a covered list of drugs behind the scenes. This lack of transparency makes it difficult, if not impossible, for consumers to compare prices. In addition, increasing consolidation among PBMs has led to higher prices for prescription drugs over time.
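A stylized rebate calculation shows why prices are so hard for consumers to compare; the list price and rebate percentage below are hypothetical. The PBM's negotiated discount lowers the plan's net cost, but that net price is not what the consumer sees when comparing drugs.

```python
# Stylized view of PBM rebate math; every number is a hypothetical
# illustration, not data from the article.

list_price = 500.00        # manufacturer's list price per month
rebate_share = 0.30        # assumed rebate the PBM negotiates off the list price

plan_net_cost = list_price * (1 - rebate_share)   # what the plan effectively pays

print(f"List price the consumer sees: ${list_price:,.2f}")
print(f"Rebate negotiated by the PBM: ${list_price * rebate_share:,.2f}")
print(f"Plan's net cost after rebate: ${plan_net_cost:,.2f}")
# A consumer comparing drugs only ever observes the $500 list price;
# the $150 rebate and $350 net price stay behind the scenes.
```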

As a result of these trends, employers have shifted costs to employees; one common example is the implementation of high-deductible insurance plans, which increase consumers’ out-of-pocket costs. High costs can hurt employees in other ways, too: there’s evidence that as employer-provided health costs rise, employers are constrained in their ability to increase wages.

. . .

The history of health insurance in the United States is a lesson in good intentions with unforeseen consequences — along with an inability or unwillingness to act when the consequences become clear. The combination of government-provided and private health insurance, including the Affordable Care Act and Medicaid, now covers 90% of the population, but as long as health care providers lack competition and profit from volume-based care, it’s unlikely that costs can be constrained. And when it comes to employer-based plans, costs are becoming untenable — and increasingly are shouldered by employees.
