For most physicians and surgeons, cultivating good relationships with hospitals only became important in the twentieth century. Late-nineteenth-century advice books on professionalism for physicians, for example, seldom mentioned hospitals. Instead, they provided specifics on what seemed of greater moment: proper manners with patients, professional dress, lettering on the office window, and the etiquette of professional interactions. From the 1890s onward, however, developments in “scientific” surgery—widespread general anesthesia, aseptic operating techniques, refined handling of tissues, surgical pathology—favored hospitals, as even opulent private homes no longer proved adequate for surgical procedures.
The idea that one needed hospitals to control all phases of life-threatening medical episodes has only arisen since World War II, however. As late as the early 1950s, the way internists cared for patients with heart attacks—opiates and extended bed rest—was closer to Galen’s approach than to today’s. Most medical diagnostic and therapeutic interventions could be performed in physicians’ offices, private labs, and patients’ homes.
The convergence of three major factors brought hospitals to the fore for everyone from the late 1950s onward. First, aggressive medical and surgical treatments—such as chemotherapy regimens, extended courses of IV antibiotics, and increasingly invasive surgeries for breast cancer—began going mainstream. Second, by the late 1960s intensive-care technologies were becoming common, and patients and physicians wanted more of them. Third, Congress, the executive branch, and state legislatures obliged with expanded federal subsidies for hospital-building programs and tax-exempt bonds.
What cemented the centrality of hospitals was the passage of Medicare and Medicaid (Public Law 89-97) in 1965. By 1966, more than 19 million individuals ages 65 and older were enrolled. Beneficiaries’ out-of-pocket costs were modest: in 1970, Medicare Part A, which covered hospitalizations, carried a deductible of $52 per year, and Part B (supplemental medical insurance) charged a monthly premium of $4. For hospitals, especially urban ones, however, Medicare and Medicaid caused a sea change. What had been for centuries their most problematic service group—the elderly indigent—became desirable in a trice, now that their care was monetized.
Almost everything Medicare supported was done on a “cost-plus” basis, though it wasn’t called that. It didn’t take long for community hospitals, academic medical centers, and doctors to grasp that cost-plus reimbursement with no overall budget caps meant they could realize their big dreams, especially those that required expensive facilities, staff, and trainees, and then dream new ones.
From the perspective of hospital leaders, medical school deans, and physicians, it was perhaps best of all that Medicare made few demands and asked few substantive questions. Throughout the 1970s and for most of the 1980s, as long as the paperwork was in order, the bills got paid. The composition of the medical workforce provides a telling example. Previously, clinical training programs had been the financial obligation of hospitals; now Medicare paid, and the programs expanded considerably. New specialties emerged, and older ones spawned subspecialties. Training programs lengthened, and billing codes paid sizeable premiums for specialist care. For several decades, Medicare paid senior physicians for patient services that had been rendered entirely by their trainees, a single Medicare detail that enabled a generation of procedural specialists not merely to make a good living but to become genuinely wealthy. In most instances, insurance companies followed Medicare’s lead.
Meanwhile, although federal legislation (Hill-Burton) may have been winding down its direct subsidies of hospital building programs, legal decisions, tweaks in Medicare hospital payment formulas, and state legislation meant hospitals still had access to abundant capital at comparatively cheap rates. They, like the schools, professional organizations, and many physicians, adopted an approach of atomized hustle and scramble as they sought to maximize their individual situations.
It should be no wonder that the period from 1965 through the mid-1980s, an interval when almost every current senior leader in U.S. health care underwent professional formation, is often remembered by them as the “Golden Age” of U.S. medicine. For them, Medicare and hospitals have been the center of the medical universe. Present-day leaders, whether in the hospitals, insurance companies, Congress, the professional organizations, or the schools, often have difficulty imagining a world in which cost-plus reimbursement and laissez-faire behavior do not coexist in happy symbiosis.
Throughout much of its life, Medicare’s approaches to physicians and hospitals have been rife with internal contradictions. Soon after it started, Medicare expanded its offerings, a practice that has continued, most notably in the very expensive Public Law 108-173—the Medicare Prescription Drug, Improvement, and Modernization Act of 2003. From 1980 onward, though, Medicare also tried systematically to limit growth in payments to hospitals and physicians, and that trend finds expression in numerous places in the massive health care reform legislation of 2009 and 2010.
Except where Medicare decides to pursue a particular subject, such as the number of days it will pay an acute-care hospital for ventilator use by a patient, laissez-faire continues as the default norm. Thus, although Medicare pays almost all the costs of training specialists, it exerts little influence on the shape of the U.S. medical workforce. Many health care organizations, medical schools, and the U.S. government talk up the value of primary care, but it increasingly lies in tatters in the United States, where it exists at all. One reason may be that primary care doesn’t pay well—not only for physicians, which is what is usually noted—but also for hospitals, medical schools, and professional organizations.
Unintended consequences abound from the foregoing. For example, hospitals have endeavored for years to keep up with changes in Medicare reimbursement formulas (and to shape them through lobbying). When the cost and duration of individual hospitalizations seemed of prime importance to Medicare, hospitals vigorously promoted the value of case managers and hospitalist physicians. One unintended consequence for chronically ill patients—adults and children alike—and their community-based generalist physicians is that they gradually lose touch with each other: the community doctors no longer spend much time in the hospital, and the hospitalists who care for their patients there seldom work with them.
Medicare pays well for procedures, but it often pays poorly for time spent with patients, which is frequently what chronically ill patients need most. As a consequence, patients who reside many miles from specialist medical centers may live as medical orphans. The situation is particularly poignant for families with children who have serious chronic diseases.
For the frail elderly, a group that is disproportionately female, the relentless push by hospitals to reduce length of stay has often meant that they spend their final months or years undergoing an agonizing shuttle from acute hospital bed to nursing home, only to be returned to the hospital days or weeks later. Specialists tend to their failing organs in the hospital, and nurses oversee their custodial care in nursing homes. No one may know the patient’s preferences for comfort, function, and longevity, which means no one crafts a comprehensive treatment plan with them. Instead, frail patients with chronically failing organ systems undergo aggressive interventions until their bodies can take no more, at which point they are discharged to hospice or a nursing home to die. Medicare pays the bills.
Although it may be tempting to point at acute care hospitals as key shapers of health care, in reality they mostly function as pawns of Medicare. Medicare shapes much of the daily routine of office practice for most physicians and surgeons as well. But Medicare, giant as it is, increasingly resembles nothing so much as a partisan political football, a situation no other advanced country tolerates in health care financing. A biomedical-industrial complex has arisen in the U.S., and its entities unceasingly try to game the system even as they feed off it. It is not surprising that former Medicare executives often serve in senior capacities in organizations that depend on Medicare’s largesse. It also seems likely that hospitals will continue using their economies of scale and access to cheap capital to their advantage in relationships with their medical staffs. Those staffs, in turn, may splinter into factions: hospital-based workers in one group, with subgroups for contractors and employees, and community-based physicians, for whom the hospital is becoming less central to a successful practice, in another. All parties, including residents if present, will likely continue their atomized scramble and hustle.
In the United States more than anywhere else, patients (and their doctors) find themselves betwixt two value systems: one that casts the patient as consumer, which is what hospitals, drug companies, and health insurers like to promote, and one that views the patient (and family) as one-half of a patient-doctor relationship. That relationship in turn depends on the willingness and ability of physicians and nurses to provide their patients (and families) sound judgments in the face of uncertainty. To the degree physicians permit their judgments to be corrupted by other interests, including the self-serving ones of hospitals, “good doctoring” will prove a challenge. And medicine’s social contract, the unwritten but powerful basis of physicians’ social and cultural privileges, will continue to erode. Medicare takes no official position, but its passivity should not be mistaken for lack of influence, as its policies and practices shape almost every aspect of hospital and physician relationships.