

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Dec 17, 2023 • 27min
AF - Bounty: Diverse hard tasks for LLM agents by Beth Barnes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bounty: Diverse hard tasks for LLM agents, published by Beth Barnes on December 17, 2023 on The AI Alignment Forum.
Summary
We're looking for (1) ideas, (2) detailed specifications, and (3) well-tested implementations for tasks to measure performance of autonomous LLM agents.
Quick description of key desiderata for tasks:
Not too easy: would take a human professional >2 hours to complete, ideally some are >20 hours.
Easy to derisk and ensure that there aren't weird edge cases or bugs: we want to be able to trust that success or failure at the task is a real indication of ability, rather than a bug, loophole, or unexpected difficulty in the task
Plays to strengths of LLM agents: ideally, most of the tasks can be completed by writing code, using the command line, or other text-based interaction.
See desiderata section for more explanation of what makes an ideal task.
Important info
WE DON'T WANT TASKS OR SOLUTIONS TO END UP IN TRAINING DATA. PLEASE DON'T PUT TASK INSTRUCTIONS OR SOLUTIONS ON THE PUBLIC WEB
Also, while you're developing or testing tasks, turn off ChatGPT history, make sure Copilot settings don't allow training on your data, etc.
If you have questions about the desiderata, bounties, infra, etc., please post them in the comments section here so that other people can learn from the answers!
If your question requires sharing information about a task solution, please email task-support@evals.alignment.org instead.
Exact bounties
1. Ideas
Existing list of ideas and more info here.
We will pay $20 (as 10% chance of $200) if we make the idea into a full specification
2. Specifications
See example specifications and description of requirements here.
We will pay $200 for specifications that meet the requirements and that meet the desiderata well enough that we feel excited about them
3. Task implementations
See guide to developing and submitting tasks here.
We will pay $2000-4000 for fully-implemented and tested tasks that meet the requirements and that meet the desiderata well enough that we feel excited about them
4. Referrals
We will pay $20 (as 10% chance of $200) if you refer someone to this bounty who submits a successful task implementation. (They'll put your email in their submission form)
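For clarity, the "10% chance of $200" payouts are lotteries with the same expected value as a flat $20 payment:
E[payout] = 0.10 × $200 + 0.90 × $0 = $20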
To make a submission, you'll need to download the starter pack, then complete the quiz to obtain the password for unzipping it. The starter pack contains submission links, templates for task code, and infrastructure for testing + running tasks.
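The real templates ship in the starter pack; purely as an illustrative sketch (every name and method below is hypothetical, not the actual submission interface), a task definition might look something like:
```python
# Purely illustrative -- the real task templates ship in the starter pack
# and the actual interface may differ substantially from this sketch.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    instructions: str        # shown to the agent at the start of its run
    est_human_hours: float   # rough time for a human professional (>2h desired)

def score(task: Task, agent_output: str) -> float:
    """Return a score in [0, 1]. Robust, loophole-free scoring is what
    makes a task "easy to derisk": success should reflect real ability."""
    # A real scorer would inspect artifacts the agent produced (files,
    # passing tests, a correct final answer), not just string-match output.
    return 1.0 if "ALL TESTS PASS" in agent_output else 0.0

example = Task(
    name="speed-up-pipeline",
    instructions="Make the provided data pipeline 10x faster without changing outputs.",
    est_human_hours=4.0,
)
```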
Anyone who provides a task implementation or a specification that gets used in our paper can get an acknowledgement, and we will consider including people who implemented tasks as co-authors.
We're also excited about these bounties as a way to find promising hires. If we like your submission, we may fast-track you through our application process.
Detailed requirements for all bounties
Background + motivation
We want tasks that, taken together, allow us to get a fairly continuous measure of "ability to do things in the world and succeed at tasks". It's great if they can also give us information (especially positive evidence) about more specific threat models - e.g. cybercrime, or self-improvement. But we're focusing on the general capabilities because it seems hard to rule out danger from an agent that's able to carry out a wide variety of multi-day expert-level projects in software development, ML, computer security, research, business, etc just by ruling out specific threat stories.
We ideally want the tasks to cover a fairly wide range of difficulty, such that an agent that succeeds at most of the tasks seems really quite likely to pose a catastrophic risk, and an agent that makes very little progress on any of the tasks seems very unlikely to pose a catastrophic risk.
We also want the distribution of task difficulty to be fair...

Dec 16, 2023 • 8min
LW - "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity by Thane Ruthenis
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity, published by Thane Ruthenis on December 16, 2023 on LessWrong.
When discussing AGI Risk, people often talk about it in terms of a war between humanity and an AGI. Comparisons between the amounts of resources at both sides' disposal are brought up and factored in, big impressive nuclear stockpiles are sometimes waved around, etc.
I'm pretty sure that's not how it would look, on several levels.
1. Threat Ambiguity
I think what people imagine, when they imagine a war, is Terminator-style movie scenarios where the obviously evil AGI becomes obviously evil in a way that's obvious to everyone, and then it's a neatly arranged black-and-white humanity vs. machines all-out fight. Everyone sees the problem and knows everyone else sees it too; the problem is common knowledge, and we can all decisively act against it.[1]
But in real life, such unambiguity is rare. The monsters don't look obviously evil; the signs of fatal issues are rarely blatant.
And if you're not that sure, well...
Better not act up. Better not look like you're panicking. Act very concerned, sure, but in a calm, high-status manner. Provide a measured response. Definitely don't take any drastic, unilateral actions. After all, what if you do, but the threat turns out not to be real? Depending on what you've done, the punishment inflicted might range from embarrassment to complete social ostracization, and the fear of those is much more acute in our minds, compared to some vague concerns about death.
And the AGI, if it's worth the name, would not fail to exploit this. Even when it starts acting to amass power, there would always be a prosocial, plausible-sounding justification for why it's doing that. It'd never stop making pleasant noises about having people's best interests at heart. It'd never stop being genuinely useful to someone. It'd ensure that there's always clear, unambiguous harm in shutting it down. It would ensure that society as a whole is always doubtful regarding its intentions - and thus, that no-one would feel safe outright attacking it.
Much like there's no fire alarm for AGI, there would be no fire alarm for the treacherous turn. There would never be a moment, except maybe right before the end, where "we must stop the malign AGI from killing us all!" would sound obviously right to everyone. This sort of message would always appear a bit histrionic, an extremist stance that no respectable person would shout out. There would always be fear that if we act now, we'll then turn around and realize that we jumped at shadows. Right until the end, humans will fight using slow, ineffectual, "measured" responses.
The status-quo bias, asymmetric justice, the Copenhagen Interpretation of Ethics, threat ambiguity - all of that would be acting to ensure this.
There's a world of difference between 90% confidence and 99% confidence, when it comes to collective action. And the AGI would need to screw up very badly indeed, for the whole society to become 99% certain it's malign.
2. Who Are "We"?
Another error is thinking about a unitary response from some ephemeral "us". "We" would fight the AGI, "we" would shut it down, "we" would not give it power over the society / the economy / weapons / factories.
But who are "we"? Humanity is not a hivemind; we don't even have a world government. Humans are, in fact, notoriously bad at coordination. So if you're imagining "us" naturally responding to the threat in some manner that, it seems, is guaranteed to prevail against any AGI adversary incapable of literal mind-hacking...
Are you really, really sure that "we", i. e. the dysfunctional mess of the human civilization, are going to respond in this manner? Are you sure you're not falling prey to the Typical Mind Fallacy, when you're imagining a...

Dec 16, 2023 • 2min
EA - EA for Christians 2024 Conference in D.C. | May 18-19 by JDBauman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA for Christians 2024 Conference in D.C. | May 18-19, published by JDBauman on December 16, 2023 on The Effective Altruism Forum.
EACH's annual conference is its largest event all year, bringing together 100 Christians in the EA movement for talks, workshops, 1-on-1s, and prayer.
Apply here and share with Christians you know.
We welcome our keynote speaker Caleb Watney, cofounder of the Institute for Progress.
This conference is a fantastic opportunity to:
Hear from excellent speakers about themes relevant to the intersection of EA & Christianity.
Meet some of the over 500 Christians in the global effective altruism movement
Build your plans to improve the world, especially through high-impact careers and effective giving.
This event is primarily an in-person event. Online attendees may participate in networking, and talks will be recorded and shared.
See talks from last year here.
If you are a Christian who is interested in effective altruism, then we would love to see you there. While this event is primarily for Christians, we welcome anyone sincerely interested in the intersection of Christianity and EA. We hope to learn from you as you learn from us.
Curious about EA for Christians? Want to learn more?
Check out any of these:
Website: www.eaforchristians.org/about-us
Facebook group: www.facebook.com/eaforchristians/groups/
Newsletter: http://eepurl.com/ds6IMb
Our Linktree: https://linktr.ee/effectivealtruismforchristians
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 16, 2023 • 40sec
LW - Talking With People Who Speak to Congressional Staffers about AI risk by Eneasz
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talking With People Who Speak to Congressional Staffers about AI risk, published by Eneasz on December 16, 2023 on LessWrong.
A conversation with Jason Green-Lowe and Jakub Kraus of the Center for AI Policy. They've met with 50+ congressional staffers in DC about AI regulation efforts and are in the process of drafting a model bill. This is a somewhat entry-level conversation, good for getting an idea of what's going on over there without getting very wonky.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 16, 2023 • 1min
EA - The Global Fight Against Lead Poisoning, Explained (A Happier World video) by Jeroen Willems
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Global Fight Against Lead Poisoning, Explained (A Happier World video), published by Jeroen Willems on December 16, 2023 on The Effective Altruism Forum.
In this video, I explore the issue of lead poisoning with turmeric adulteration as the angle.
I interviewed Drew McCartor from Pure Earth, Rachel Silverman from the Center for Global Development and Kris Newby who reported on turmeric adulteration for Stanford. I also visited a lab to actually test my own turmeric!
Would love to hear what you think!
Thanks to everyone who provided valuable feedback.
Charities mentioned
Pure Earth
LEEP
Center For Global Development
Sources
The Vice of Spice by Wudan Yan for Undark
Dylan Matthews for Vox
Kris Newby for Stanford Magazine
Link to a transcript with the rest of the sources is in progress!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Dec 16, 2023 • 4min
EA - What is the current most representative EA AI x-risk argument? by Matthew Barnett
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the current most representative EA AI x-risk argument?, published by Matthew Barnett on December 16, 2023 on The Effective Altruism Forum.
I tend to disagree with most EAs about existential risk from AI. Unfortunately, my disagreements are all over the place. It's not that I disagree with one or two key points: there are many elements of the standard argument that I diverge from, and depending on the audience, I don't know which points of disagreement people think are most important.
I want to write a post highlighting all the important areas where I disagree, and offering my own counterarguments as an alternative. This post would benefit from responding to an existing piece, along the same lines as Quintin Pope's article 'My Objections to "We're All Gonna Die with Eliezer Yudkowsky"'. By contrast, it would be intended to address the EA community as a whole, since I'm aware many EAs already disagree with Yudkowsky even if they buy the basic arguments for AI x-risks.
My question is: what is the current best single article (or set of articles) that provide a well-reasoned and comprehensive case for believing that there is a substantial (>10%) probability of an AI catastrophe this century?
I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried that the article is not very representative of EAs who have substantial probabilities of doom, since he originally estimated a total risk of catastrophe at only 5% before 2070. In May 2022, Carlsmith changed his mind and reported a higher probability, but I am not sure whether this is because he has been exposed to new arguments, or because he simply thinks the stated arguments are stronger than he originally thought.
I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to include both in such an article, while mainly focusing on the empirical points. For example, I have the feeling that I disagree with most EAs about:
How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first place
Whether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humans
How likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalition
Whether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humans
Whether the AIs themselves will be significant moral patients, on par with humans
Whether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish line
Whether we get only "one critical try" to align AGI
Whether "AI lab leaks" are an important source of AI risk
How likely AIs are to kill every single human if they are unaligned with humans
Whether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of years
How bad problems related to "specification gaming" will be in the future
How society is likely to respond to AI risks, and whether they'll sleepwalk into a catastrophe
However, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example, I think that:
AIs will eventually become vastly more powerful and smarter than humans. So, I think AIs will eventually be able to "defeat all of us combined"
I think a benign "AI takeover" event is very likely even if we align AIs successfully
AIs will likely be goal-...

Dec 16, 2023 • 11min
AF - Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem by Ansh Radhakrishnan
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem, published by Ansh Radhakrishnan on December 16, 2023 on The AI Alignment Forum.
Thanks to Roger Grosse, Cem Anil, Sam Bowman, and Tamera Lanham for helpful discussion and comments on drafts of this post.
Two approaches to addressing weak supervision
A key challenge for adequate supervision of future AI systems is the possibility that they'll be more capable than their human overseers. Modern machine learning, particularly supervised learning, relies heavily on the labeler(s) being more capable than the model attempting to learn to predict labels. We shouldn't expect this to always work well when the model is more capable than the labeler,[1] and this problem also gets worse with scale - as the AI systems being supervised become even more capable, naive supervision becomes even less effective.
One approach to solving this problem is to try to make the supervision signal stronger, such that we return to the "normal ML" regime. These scalable oversight approaches aim to amplify the overseers of an AI system such that they are more capable than the system itself. It's also crucial for this amplification to persist as the underlying system gets stronger. This is frequently accomplished by using the system being supervised as part of a more complex oversight process, such as by forcing it to argue against another instance of itself, with the additional hope that verification is generally easier than generation.
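As a toy illustration of that kind of protocol (a sketch only: the arguer and judge here are stubs standing in for model calls, and real debate schemes are far more involved), the control flow might look like:
```python
def debate(question, argue, judge, rounds=2):
    """Toy control flow for a debate-style oversight protocol: two copies
    of the supervised model argue opposite sides, and a weaker judge only
    has to verify the arguments rather than generate an answer itself.

    `argue(question, side, transcript)` and `judge(question, transcript)`
    are stand-ins for calls to the models."""
    transcript = []
    for _ in range(rounds):
        for side in ("yes", "no"):
            transcript.append((side, argue(question, side, transcript)))
    return judge(question, transcript)

# Stub usage -- real arguers and judges would be model calls:
verdict = debate(
    "Did the agent's code change introduce a vulnerability?",
    argue=lambda q, side, t: f"case for '{side}' given {len(t)} prior turns",
    judge=lambda q, t: "yes",  # a real judge weighs the full transcript
)
```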
Another approach is to make the strong student (the AI system) generalize correctly from the imperfect labels provided by the weak teacher. The hope for these weak-to-strong generalization techniques is that we can do better than naively relying on unreliable feedback from a weak overseer, and instead access the latent, greater capabilities that our AI system has, perhaps by a simple modification of the training objective.
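One example of such a modification is the auxiliary confidence loss from the OpenAI paper discussed below; here is a simplified sketch (details and weighting schedule simplified from the paper):
```python
import torch
import torch.nn.functional as F

def aux_confidence_loss(strong_logits, weak_labels, alpha=0.5):
    """Simplified sketch of an auxiliary-confidence objective: mix
    cross-entropy against the weak supervisor's labels with cross-entropy
    against the strong model's own hardened predictions, so the student
    isn't pushed to imitate the supervisor's mistakes."""
    ce_weak = F.cross_entropy(strong_logits, weak_labels)
    hard_self = strong_logits.argmax(dim=-1).detach()
    ce_self = F.cross_entropy(strong_logits, hard_self)
    return (1 - alpha) * ce_weak + alpha * ce_self

# Toy usage: batch of 4 examples, 3 classes, weak labels as class indices.
logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 2, 1, 0])
loss = aux_confidence_loss(logits, labels)
```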
So, I think of these as two orthogonal approaches to the same problem: improving how well we can train models to perform well in cases where we have trouble evaluating their labels. Scalable oversight just aims to increase the strength of the overseer, such that it becomes stronger than the system being overseen, whereas weak-to-strong generalization tries to ensure that the system generalizes appropriately from the supervision signal of a weak overseer.
I think that researchers should just think of these as the same research direction. They should freely mix and match between the two approaches when developing techniques. And when developing techniques that only use one of these approaches, they should still compare to baselines that use the other (or a hybrid).
(There are some practical reasons why these approaches generally haven't been unified in the past. In particular, for scalable oversight research to be interesting, you need your weaker models to be competent enough to follow basic instructions, while generalization research is most interesting when you have a large gap in model capability. But at the moment, models are only barely capable enough for scalable oversight techniques to work, so you can't have a large gap in model capability where the less-capable model is able to participate in scalable oversight. On the other hand, the OpenAI paper uses a GPT-2-compute-equivalent model as a weak overseer, which has a big gap to GPT-4 but is way below the capability required for scalable oversight techniques to do anything. For this reason, the two approaches should still probably be investigated in somewhat different settings for the moment.)
Here are some examples of hybrid protocols incorporating weak-to-strong techniques and scalable oversight schemes:
Collect preference comparisons from a language model and a human rater, but allow the rate...

Dec 16, 2023 • 2min
AF - Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision by leogao
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision, published by leogao on December 16, 2023 on The AI Alignment Forum.
Links: Blog, Paper.
Abstract:
Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior - for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too difficult for humans to reliably evaluate; humans will only be able to weakly supervise superhuman models.
We study an analogy to this problem: can weak model supervision elicit the full capabilities of a much stronger model? We test this using a range of pretrained language models in the GPT-4 family on natural language processing (NLP), chess, and reward modeling tasks. We find that when we naively finetune strong pretrained models on labels generated by a weak model, they consistently perform better than their weak supervisors, a phenomenon we call weak-to-strong generalization.
However, we are still far from recovering the full capabilities of strong models with naive finetuning alone, suggesting that techniques like RLHF may scale poorly to superhuman models without further work. We find that simple methods can often significantly improve weak-to-strong generalization: for example, when finetuning GPT-4 with a GPT-2-level supervisor and an auxiliary confidence loss, we can recover close to GPT-3.5-level performance on NLP tasks. Our results suggest that it is feasible to make empirical progress today on a fundamental challenge of aligning superhuman models.
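The paper quantifies how much of the gap is closed with a "performance gap recovered" metric; a minimal sketch, assuming accuracy as the performance measure:
```python
def performance_gap_recovered(weak, weak_to_strong, strong_ceiling):
    """Fraction of the gap between the weak supervisor's performance and
    the strong model's ceiling that weak-to-strong training recovers:
    1.0 means full capability elicited, 0.0 means no better than the
    weak labels alone."""
    return (weak_to_strong - weak) / (strong_ceiling - weak)

# E.g. a weak supervisor at 60% accuracy, naive finetuning reaching 70%,
# and a ground-truth-trained ceiling of 80% recovers half the gap:
print(performance_gap_recovered(0.60, 0.70, 0.80))  # 0.5
```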
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Dec 16, 2023 • 22min
EA - #175 - Preventing lead poisoning for $1.66 per child (Lucia Coulter on the 80,000 Hours Podcast) by 80000 Hours
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #175 - Preventing lead poisoning for $1.66 per child (Lucia Coulter on the 80,000 Hours Podcast), published by 80000 Hours on December 16, 2023 on The Effective Altruism Forum.
We just published an interview: Lucia Coulter on preventing lead poisoning for $1.66 per child. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
I always wonder if one part of it is just the really invisible nature of lead as a poison. Of course impacts aren't invisible: millions of deaths and trillions of dollars in lost income. But the fact that lead is the cause is not apparent. It's not apparent when you're being exposed to the lead. The paint just looks like any other paint; the cookware looks like any other cookware.
And also, if you are suffering the effects of lead poisoning, if you have cognitive impairment and heart disease, you're not going to think, "Oh, it was that lead exposure." It's just not going to be clear.
Lucia Coulter
Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they'll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.
We've known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.
Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children's intellectual potential, health, and life expectancy is vast - the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.
This week's guest, Lucia Coulter - cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) - speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.
Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people's lifetime income anywhere from $300-1,200 for each $1 it spends, by preventing intellectual stunting.
Which raises the question: why hasn't this happened already? How is lead still in paint in most poor countries, even when that's oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?
With host Robert Wiblin, Lucia answers all those questions and more:
Why LEEP isn't fully funded, and what it would do with extra money (you can donate here).
How bad lead poisoning is in rich countries.
Why lead is still in aeroplane fuel.
How lead got put straight in food in Bangladesh, and a handful of people got it removed.
Why the enormous damage done by lead mostly goes unnoticed.
The other major sources of lead exposure aside from paint.
Lucia's story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship's Incubation Program.
Why Lucia pledges 10% of her income to cost-effective charities.
Lucia's take on why GiveWell didn't support LEEP earlier on.
How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
Genera...

Dec 15, 2023 • 10min
LW - Contra Scott on Abolishing the FDA by Maxwell Tabarrok
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Scott on Abolishing the FDA, published by Maxwell Tabarrok on December 15, 2023 on LessWrong.
Scott Alexander's recent post on the FDA raises the average level of discourse on the subject. He starts from the premise that the FDA deserves destruction but cautions against rash action. No political slogan can be implemented without clarification, and "Abolish the FDA" is no different, but Scott's objections aren't strong reasons to stop short of a policy implementation that still retains the spirit behind the slogan.
Scott's preferred proposal is to essentially keep the authority and structure of the FDA the same but expand the definition of supplements and experimental drugs. This way, fewer drugs are illegal but there aren't big ripple effects on the prescription and health insurance systems that we have to worry about.
The more hardline libertarian proposal is to restrict the FDA's mandatory authority to labeling and make their efficacy testing completely non-binding. This would turn the FDA into an informational consumer protection agency rather than a drug regulator. They can slap big red labels on non-FDA approved drugs and invite companies to run efficacy tests to get nice green labels instead, but they can't prevent anyone from taking a drug if they want it.
Let's go through Scott's objections to the hardline plan and see if they give good reasons to favor one over the other.
Are we also eliminating the concept of prescription medication?
I can see some "If I were king of the world" overhauls to the health system that might do away with mandatory prescriptions, but I think the point of this exercise is to see if we can abolish the FDA without changing anything else and still come out ahead, accounting for costly second-order effects from the rest of the messed up health system. So no, the hardline "abolish the FDA" plan would not remove the legal barrier of prescription. Here is Scott's response:
But if we don't eliminate prescriptions, how do you protect prescribers from liability? Even the best medications sometimes cause catastrophic side effects. Right now your doctor doesn't worry you'll sue them, because "the medication was FDA-approved" is a strong defense against liability. But if there are thousands of medications out there, from miraculous panaceas to bleach-mixed-with-snake-venom, then it becomes your doctor's responsibility to decide which are safe-and-effective vs. dangerous-and-useless. And rather than take that responsibility and get sued, your doctor will prefer to play it safe and only use medications that everyone else uses, or that were used before the FDA was abolished.
This is a reasonable concern: litigation pressure is a common culprit behind spiraling regulatory burden. But in this case we can be confident that turning the FDA into a non-binding informational board won't turn prescriptions into an even higher legal hurdle, because doctors already prescribe far outside of FDA approval.
When a drug is tested by the FDA it is tested as a treatment for a specific condition, like diabetes or throat cancer. If the drug is approved, it is approved only for the outcome measured in efficacy testing and nothing else. However, doctors know that certain drugs approved for one thing are effective at treating others. So they can issue an "off-label" prescription based on their professional opinion.
Perhaps 20% of all prescriptions in the US are made off-label, and more than half of doctors make some off-label prescriptions.
So doctors are clearly willing to leave the legal umbrella of FDA approval when they make prescription decisions. There are lots of high-profile legal cases about off-label prescriptions, but they are mostly about marketing and they haven't dampened doctors' participation in the practice. If doctors were comfortable enough to
pres...


