This is a 12-lead electrocardiogram of a 26-year-old male:

This is the graph of the function $5\sin(7x)\sin(0.5x)\cos(3.25x)$. The graph looks *quite* similar to the ECG trace. After sketching it, I wondered whether an ECG trace could be described, at least *roughly*, by a mathematical function.

**Question:** Has anyone ever tried to fit, even roughly, a mathematical function to the diagram? Is it even possible?

Try this:

With $u = \operatorname{mod}(x-10,\,20) - 10$,

$$f(x)=\frac{-20\,e^{u}\left(e^{5u}-57e^{4u}+302e^{3u}-302e^{2u}+57e^{u}-1\right)}{\left(e^{u}+1\right)^{7}}$$
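As a quick sanity check, the formula can be evaluated numerically. The sketch below (Python; the sampling grid is my own choice, not part of the answer) implements $f$ via the substitution $u = \operatorname{mod}(x-10, 20) - 10$ and relies on the fact that the waveform repeats every 20 units, like a chain of heartbeats:

```python
import math

def f(x):
    # substitute u = mod(x - 10, 20) - 10, so f repeats every 20 units
    u = math.fmod(x - 10, 20)
    if u < 0:
        u += 20          # math.fmod keeps the dividend's sign; shift into [0, 20)
    u -= 10
    e = math.exp(u)
    num = -20 * e * (e**5 - 57 * e**4 + 302 * e**3 - 302 * e**2 + 57 * e - 1)
    return num / (e + 1)**7

# sample one full period; the sharp QRS-like spike sits near u = 0
samples = [f(k / 10) for k in range(200)]
```

Plotting `samples` against `x` reproduces the single-spike "heartbeat" shape; the quintic in `e**u` is what gives the spike its narrow positive and negative lobes.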

## Mathematical requirements and exemplifications

In order to be able to develop their skills, knowledge and understanding in Biology, students need to have been taught, and to have acquired competence in, the appropriate areas of mathematics as indicated in the table of coverage below.

Overall, at least 10% of the marks in assessments for biology will require the use of mathematical skills. These skills will be applied in the context of biology and will be at least the standard of higher tier GCSE mathematics.

The following tables illustrate where these mathematical skills may be developed during teaching or could be assessed. Those shown in **bold type** would only be tested in the full A-level course.

This list of examples is not exhaustive. These skills could be developed or assessed in other areas of specification content. Other areas where these skills could be developed have been exemplified throughout these specifications.

## Can a mathematical function be assigned to an ECG diagram? - Biology

**Function Representations**

You are probably most familiar with the symbolic representation of functions, such as an equation relating the output *y* to the input *x*.

Functions can be represented by tables, symbols, or graphs. Each of these representations has its advantages. Tables explicitly supply the functional values of specific inputs. Symbolic representations compactly state how to compute functional values. Graphs provide a visual representation of a function, showing how the function changes over a range of inputs.

Tables provide an easy means to compare the inputs and outputs of a given function. A complete table, listing all inputs and outputs, can only be used when there are a small number of them. A partial table can be used to list a few select inputs and outputs. This type of table often indicates the shape of the function, or the pattern for generating the outputs from the inputs.

Complete tables can tell you if a given relation is a function or not. Consider the following complete table,

| *x* | −3 | −2 | −1 | 0 | 1 | 2 | 3 |
|---|---|---|---|---|---|---|---|
| *y* | 1 | −2 | 2 | 4 | −3 | −2 | −1 |

By inspection, we can see that the above table represents a function because each input corresponds to exactly one output. Do not be alarmed that the output *y* = −2 is listed twice. The fact that two different inputs give rise to the same output does not violate the definition of a function. The table below, on the other hand, does not represent a function,

| *x* | 1 | 1 | 3 | 3 |
|---|---|---|---|---|
| *y* | 2 | −3 | 1 | −1 |

In this case, the input *x* = 3 gives rise to two different outputs, *y* = 1 and *y* = −1. This is also true for the input *x* = 1, which corresponds to the outputs *y* = 2 and *y* = −3.
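The rule just described — each input must correspond to exactly one output — is mechanical enough to check in code. A minimal sketch (the function name and sample tables are mine) over a table given as (input, output) pairs:

```python
def is_function(table):
    """Return True if each input x appears with exactly one output y."""
    seen = {}
    for x, y in table:
        if x in seen and seen[x] != y:
            return False  # same input mapped to two different outputs
        seen[x] = y
    return True

# repeated outputs are fine: two inputs may share the output -2 ...
ok = is_function([(-3, 1), (-2, -2), (2, -2), (0, 4)])
# ... but repeated inputs with different outputs are not
bad = is_function([(3, 1), (3, -1), (1, 2), (1, -3)])
```

This is exactly the vertical line test discussed later, phrased for tables instead of graphs.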

Functions are commonly represented symbolically because these representations are compact. An example of a symbolic representation is

$$y = 2x$$

In this case, we multiply each input *x* by 2 to get the corresponding output *y*.

Another example of a symbolic representation is

$$y = x^2 + 1$$

In this case, we take each input *x*, square it, and then add one.

**How do you know if a given equation represents a function?**

Not all equations are symbolic representations of functions. For example, consider the following equation,

$$x = y^2$$

Is *y* a function of *x* in the above equation? To determine whether *y* is a function of *x*, it is convenient to solve for *y* as,

$$y = \pm\sqrt{x}$$

Now it is clear that *y* is not a function of *x* because for each valid input *x* (except *x* = 0), there are two outputs. For example, the input *x* = 4 results in the outputs *y* = 2 and *y* = −2.

We will now explore graphical representations of functions. A graph is a way to visualize ordered pairs, (*x*, *y*), on a set of coordinate axes (the *xy*-plane). We will begin by showing the graphical representation of the function represented in the table,

We can draw the graph of this function by plotting the ordered pairs listed in the above table (i.e. (−3, 1), (−2, −2), (−1, 2), (0, 4), (1, −3), (2, −2), (3, −1)) as,

Notice that we do not connect the points because the table only gives us functional values at particular points. We do not know the functional values in between two points, such as *x* = −3 and *x* = −2. Therefore, we must assume that the function is not defined between these points. Even though we do not connect the points on the graph, it still represents a function because each input corresponds to exactly one output.

If we graph the points in the table,

we have the following graph,

Clearly, this graph indicates the assignment of multiple outputs to the inputs *x* = 1 and *x* = 3, and therefore does not represent a function. This example illustrates how graphs are a convenient way to represent relations, because one can easily test whether or not a particular graph represents a function. If a graph represents a function then it will pass the **vertical line test**, which states that a set of points represents a function if and only if no vertical line intersects the graph at more than one point. This makes sense, because if an input, *x*, is assigned to *exactly* one output, *y*, then a vertical line, which corresponds to a single value of *x*, will intersect the graph at only one point. If, on the other hand, a vertical line intersects the graph of *f* in more than one place, then *f* is not a function and fails the vertical line test. Using the vertical line test, we can see that the previous graph does not represent a function.

**Representing the Domain and Range of a Function**

We will now look at two ways to visualize the domain and range of a function. We will begin with the following diagram of domain and range,

As you can see, the points in the set on the left hand side, the domain, are mapped by the function to points in the set on the right hand side, the range. That is, the inputs in the domain are mapped by *f* to the outputs in the range.

We can visualize the domain and range of a function graphically as follows,

The red arrows on the graph indicate that the graph extends out to infinity. The green arrows show the domain as being the entire real line (i.e. all real numbers, or (−∞, ∞)). The blue arrow shows the range of the function as being (−2, ∞). Not all functions have domains that consist of all real numbers. Many functions are defined in such a way that certain inputs cannot be accepted. For instance, *x* = 0 is not in the domain of the function

$$y = \frac{1}{x}$$

because division by zero is an undefined operation. All other inputs are valid because division is defined for all real numbers except zero, and thus we write the domain as

$$(-\infty, 0) \cup (0, \infty)$$

As we explore the different functions individually, we will learn about their domains and ranges.

**In the next section we will describe some of the properties of functions.**


**About the Author**—Dr. STEVEN A. ISRAEL is a Senior Image and Pattern Recognition Scientist at Science Applications International Corporation (SAIC). Dr. Israel analyzes non-traditional data sets for a number of government, military, and academic organizations. He received a BS (1987) and MS (1991) from the State University of New York-College of Environmental Science and Forestry. His Ph.D. (1999) was granted from the Departments of Information Science and Surveying at the University of Otago, New Zealand. Dr. Israel's interests include image processing, biometrics, photogrammetry, classification, rugby, and pattern recognition.

**About the Author**—Dr. JOHN M. IRVINE is the Director of Imagery Systems at Science Applications International Corporation (SAIC). He has served as chief scientist for several programs for evaluation of assisted image exploitation systems, development, and evaluation of Automatic Target Recognition (ATR) and image understanding technology, and the application of remote sensing to a range of military and civil applications. He is currently the principal investigator for the development of new, non-traditional human identification techniques (biometrics) under the DARPA HumanID Program. Dr. Irvine received his Ph.D. in 1982 in mathematical statistics from Yale University.

**About the Author**—Mr. ANDREW C. CHENG is a Software Engineer with SAIC. He received a BS (1997) and ME (1998) in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. His interests include aspect-oriented programming, language-neutral software platforms such as .Net, and energy alternatives such as biodiesel.

## Thinking Contextually

### Overview

A wide range of visual and contextual activities are available to support the delivery of this topic. One of the activities (Mass transport system) links 2.2.1(a) with 2.2.4(a) by considering mass transport systems in both mammals and plants, also providing opportunity for embedding the mathematical concept of surface area to volume ratio (*M0.3, M4.1*).

Using dissection techniques (relates to **PAG2**) to look at both the internal and external structure of the mammalian heart ('Heart dissection' activity) is always well received by students and, when used in association with visual images of the heart, allows for consolidation and enhancement of existing knowledge.

Computerised animations to study heart action enable students to understand the differences between ‘electrical’ and ‘mechanical’ activity of the heart (see ‘Thinking Conceptually’ 1, 2, 3, 4, 5) and questions pertaining to this concept will also aid understanding (Heart function activity).

The opportunity for practical investigation arises with the study of heart rate and its effect on cardiac output (relates to **PAG10** and **PAG11**). Investigation of different factors can be carried out using *Daphnia*, but students are encouraged to get actively involved when studying the effects of exercise. Those unable to participate physically can be assigned the tasks of measuring and recording ('Heart rates' activity). Calculations associated with these practical activities will support mathematical concepts including estimation, use of appropriate units, arithmetic means and construction of tables (*M0.1, M0.4, M1.2, M1.3, M1.11, M2.3, M2.4, M3.1, M3.2*). The 'Secondary data' activity links results obtained during these practical investigations with secondary data, and Learner Activity 6 uses examples of ECGs to enable students to develop skills of analysis and interpretation.
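For illustration, the calculations mentioned above (arithmetic means, appropriate units) are simple to script. The heart-rate values below are invented for the example, and the stroke-volume figure is a typical textbook value, not one prescribed by the specification:

```python
# resting heart rates (beats per minute) for a small class sample -- invented data
rates_bpm = [62, 71, 68, 75, 80, 66]

# arithmetic mean (skill M1.2): sum of values divided by the number of values
mean_bpm = sum(rates_bpm) / len(rates_bpm)

# cardiac output = heart rate x stroke volume;
# 70 mL per beat is a typical textbook stroke volume (assumed, not measured)
stroke_volume_ml = 70
cardiac_output_ml_per_min = mean_bpm * stroke_volume_ml
```

Keeping the units in the variable names (`_bpm`, `_ml`, `_ml_per_min`) mirrors the specification's emphasis on using appropriate units throughout a calculation.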

## Cell Signaling Ligands

Typically, cell signaling is either mechanical or biochemical and can occur locally or over long distances; indeed, **categories of cell signaling are determined by the distance a ligand must travel.** Hydrophobic ligands have fatty properties and include steroid hormones and vitamin D_{3}. These molecules are able to diffuse across the target cell's plasma membrane and bind intracellular receptors.

Hydrophilic ligands, on the other hand, are often amino-acid derived and instead bind to receptors on the *surface* of the cell. Because these molecules are polar, the signal can travel through the aqueous environment of the body without assistance.


## Can one explain schemes to biologists?

John Tate and I were asked by Nature magazine to write an obituary for Alexander Grothendieck. Now he is a hero of mine, the person that I met most deserving of the adjective "genius". I got to know him when he visited Harvard and John, Shurik (as he was known) and I ran a seminar on "Existence theorems". His devotion to math, his disdain for formality and convention, his openness and what John and others call his naiveté struck a chord with me.

So John and I agreed and wrote the obituary below. Since the readership of Nature was more or less entirely made up of non-mathematicians, it seemed as though our challenge was to try to make some key parts of Grothendieck's work accessible to such an audience. Obviously the very definition of a scheme is central to nearly all his work, and we also wanted to say something genuine about categories and cohomology. Here's what we came up with:

Alexander Grothendieck

David Mumford and John Tate

Although mathematics became more and more abstract and general throughout the 20th century, it was Alexander Grothendieck who was the greatest master of this trend. His unique skill was to eliminate all unnecessary hypotheses and burrow into an area so deeply that its inner patterns on the most abstract level revealed themselves -- and then, like a magician, show how the solution of old problems fell out in straightforward ways now that their real nature had been revealed. His strength and intensity were legendary. He worked long hours, transforming totally the field of algebraic geometry and its connections with algebraic number theory. He was considered by many the greatest mathematician of the 20th century.

Grothendieck was born in Berlin on March 28, 1928 to an anarchist, politically active couple -- a Russian Jewish father, Alexander Shapiro, and a German Protestant mother, Johanna (Hanka) Grothendieck -- and had a turbulent childhood in Germany and France, evading the Holocaust in the French village of Le Chambon, known for protecting refugees. It was here in the midst of the war, at the (secondary school) Collège Cévenol, that he seems to have first developed his fascination for mathematics. He lived as an adult in France but remained stateless (on a "Nansen passport") his whole life, doing most of his revolutionary work in the period 1956 - 1970, at the Institut des Hautes Études Scientifiques (IHES) in a suburb of Paris after it was founded in 1958. He received the Fields Medal in 1966.

His first work, stimulated by Laurent Schwartz and Jean Dieudonné, added major ideas to the theory of function spaces, but he came into his own when he took up algebraic geometry. This is the field where one studies the locus of solutions of sets of polynomial equations by combining the algebraic properties of the rings of polynomials with the geometric properties of this locus, known as a *variety*. Traditionally, this had meant complex solutions of polynomials with complex coefficients, but just prior to Grothendieck's work, André Weil and Oscar Zariski had realized that much more scope and insight was gained by considering solutions and polynomials over arbitrary fields, e.g. finite fields or algebraic number fields.

The proper foundations of the enlarged view of algebraic geometry were, however, unclear and this is how Grothendieck made his first, hugely significant, innovation: he invented a class of geometric structures generalizing varieties that he called *schemes*. In simplest terms, he proposed attaching to *any* commutative ring (any set of things for which addition, subtraction and a commutative multiplication are defined, like the set of integers, or the set of polynomials in variables *x,y,z* with complex number coefficients) a geometric object, called the *Spec* of the ring (short for spectrum) or an affine scheme, and patching or gluing together these objects to form the scheme. The ring is to be thought of as the set of functions on its affine scheme.

To illustrate how revolutionary this was, a ring can be formed by starting with a field, say the field of real numbers, and adjoining a quantity \(\varepsilon\) satisfying \(\varepsilon^2 = 0\). Think of \(\varepsilon\) this way: your instruments might allow you to measure a small number such as \(\varepsilon = 0.001\) but then \(\varepsilon^2 = 0.000001\) might be too small to measure, so there's no harm if we set it equal to zero. The numbers in this ring are \(a + b\cdot\varepsilon\) with real *a,b*. The geometric object to which this ring corresponds is an infinitesimal vector, a point which can move infinitesimally but to second order only. In effect, he is going back to Leibniz and making infinitesimals into actual objects that can be manipulated. A related idea has recently been used in physics, for superstrings. To connect schemes to number theory, one takes the ring of integers. The corresponding Spec has one point for each prime, at which functions have values in the finite field of integers mod *p*, and one classical point where functions have rational number values and that is 'fatter', having all the others in its closure. Once the machinery became familiar, very few doubted that he had found the right framework for algebraic geometry and it is now universally accepted.
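The ring of "dual numbers" \(a + b\cdot\varepsilon\) with \(\varepsilon^2 = 0\) can even be computed with directly; the class below is my own illustration, not part of the obituary. Multiplying out \((a + b\varepsilon)(c + d\varepsilon) = ac + (ad + bc)\varepsilon\), since the \(\varepsilon^2\) term vanishes — which is exactly the product rule, so evaluating \(x \cdot x\) at \(x = 3 + \varepsilon\) reads off the derivative of \(x^2\) at 3:

```python
class Dual:
    """Numbers a + b*eps with eps**2 = 0 (illustrative sketch)."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

# evaluate f(x) = x*x at x = 3 + eps: the eps-coefficient is f'(3) = 6,
# Leibniz's infinitesimals made into actual objects that can be manipulated
x = Dual(3, 1)
y = x * x
```

This same construction underlies forward-mode automatic differentiation, a modern echo of the "point that can move infinitesimally but to second order only."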

Going further in abstraction, Grothendieck used the web of associated maps -- called morphisms -- from a variable scheme to a fixed one to describe schemes as *functors* and noted that many functors that were not obviously schemes at all arose in algebraic geometry. This is similar in science to having many experiments measuring some object from which the unknown real thing is pieced together or even finding something unexpected from its influence on known things. He applied this to construct new schemes, leading to new types of objects called *stacks* whose functors were precisely characterized later by Michael Artin.

His best known work is his attack on the geometry of schemes and varieties by finding ways to compute their most important topological invariant, their cohomology. A simple example is the topology of a plane minus its origin. Using complex coordinates \((z,w)\), a plane has four real dimensions and, taking out a point, what's left is topologically a three-dimensional sphere. Following the inspired suggestions of Grothendieck, Artin was able to show, with algebra alone, that a suitably defined third cohomology group of this space has one generator, that is, the sphere lives algebraically too. Together they developed what is called *étale cohomology* at a famous IHES seminar. Grothendieck went on to solve various deep conjectures of Weil, develop *crystalline cohomology* and a meta-theory of cohomologies called *motives* with a brilliant group of collaborators whom he drew in at this time.

In 1969, for reasons not entirely clear to anyone, he left the IHES where he had done all this work and plunged into an ecological/political campaign that he called *Survivre*. With a breathtakingly naive spirit (that had served him well doing math) he believed he could start a movement that would change the world. But when he saw this was not succeeding, he returned to math, teaching at the University of Montpellier. There he formulated remarkable visions of yet deeper structures connecting algebra and geometry, e.g. the symmetry group of the set of all algebraic numbers (known as its Galois group Gal\((\overline{\mathbb{Q}}/\mathbb{Q})\)).

As a friend, Grothendieck could be very warm, yet the nightmares of his childhood had left him a very complex person. He was unique in almost every way. His intensity and naiveté enabled him to recast the foundations of large parts of 21st century math using unique insights that still amaze today. The power and beauty of Grothendieck's work on schemes, functors, cohomology, etc. is such that these concepts have come to be the basis of much of math today. The dreams of his later work still stand as challenges to his successors.

The sad thing is that this was rejected as much too technical for their readership. Their editor wrote me that 'higher degree polynomials', 'infinitesimal vectors' and 'complex space' (even complex numbers) were things at least half their readership had never come across. The gap between the world I have lived in and that *even of scientists* has never seemed larger. I am prepared for lawyers and business people to say they hated math and not to remember any math beyond arithmetic, but this!? Nature is read only by people belonging to the acronym 'STEM' (= Science, Technology, Engineering and Mathematics) and in the Common Core Standards, all such people are expected to learn a hell of a lot of math. Very depressing.

Well, Nature magazine really wanted to publish *some* obit on Grothendieck and wore us out until we agreed to a severely stripped down re-edit. The obit is coming out, I believe, in the Jan. 15 issue, and copyright prevents me from putting it here. The whole issue of trying to bridge the gap between the mathematician's world and that of other scientists, or that of lay people, is a serious one, and I believe mathematicians could try harder to find bridges. An example is Gowers's work on bases in Banach spaces: when he received the Fields Medal, no one to my knowledge used the example of musical notes to explain Fourier series and thus bases of function spaces to the general public.

The essential minimum I thought for a Grothendieck obit was to make some attempt to explain schemes and say something about cohomology. To be honest, the central stumbling block for explaining schemes was the word "ring". If you haven't taken an intro to abstract algebra, where to begin? The final draft settled on mentioning in passing three examples -- polynomials (leaving out the frightening phrase "higher degree"), the dual numbers and finite fields. We batted about Spec of the dual numbers until something approaching an honest description came out, using "very small" and "infinitesimal distance". As for finite fields, in spite of John's discomfort, I thought the numbers on a clock made a decent first exposure. OK, \(\mathbb{Z}\) mod *p* as a "discrete" world, in contrast to the characteristic 0 classical/continuous world. In another direction, we also added the clause "inspired by the ideas of the French mathematician Jean-Pierre Serre", an acknowledgement of their extraordinary collaboration.

The whole thing is a compromise and I don't want to say Nature is foolish or stupid not to allow more math. The real problem is that such a huge and painful gap has opened up between mathematicians and the rest of the world. I think that Middle and High School math curricula are one large cause of this. If math was introduced as connected to the rest of the world instead of being an isolated exercise, if it was shown to connect to money, to measuring the real world, to physics, chemistry and biology, to optimizing decisions and to writing computer code, fewer students would be turned off. In fact, why not drop separate High School math classes and teach the math as needed in science, civics and business classes? If you think about it, I think you'll agree that this is not such a crazy idea.

##### Comments

We've been having a lot of trouble with scientists, in particular life scientists. They are teaching calculus by radically dumbing it down, e.g. no trig, half a page on the chain rule, and very weak exams. This is being pushed by the Dean of LS, ostensibly so that math-phobic students are not turned off science. The people in charge seem to be ecologists and they don't believe in any math that's not what they use. I suspect these students will be in real trouble when they take physics. I also suspect the readers of Nature think they know all important math and get upset if it's hinted that there's important math they haven't even heard of.

A sad story. How much math do biologists need? I would argue first of all that oscillations are a central part of *every* science plus engineering/economics/business (arguably excluding computer science) and one needs the basic tools for describing them -- sines and cosines, all of trig of course, and Euler's formula \(e^{i\theta} = \cos\theta + i\sin\theta\) -- but it is harder to argue that the abstract concept of a *ring* is ever needed. But polynomials *and* varieties have been used in Sturmfels' algebraic statistics and, as Lior Pachter noted (see below), are very effectively used in modeling genome mutation. But evolutionary genomics is one community within biology, and John and I figured we needed to throw into the obit a rough definition of a ring.

Jan.2: I received email from Steven Salzberg about the challenge of bridging the gap between math and biology, including a link to a fascinating blog on this gap by Lior Pachter. Pachter details how varieties arise as sets of probabilities consistent with a class of models, an application I was only dimly aware of when writing the obit with John Tate. He then elaborates at length on the many ways in which the culture of mathematicians and of biologists differ, cultures that he straddles at UC Berkeley. As he goes on to say, "The extent to which the two cultures have drifted apart is astonishing" and worse, both sides seem happy to ignore each other. To illustrate this, he cites another side to the situation at UCLA mentioned by Gieseker -- that the math dept is not one of 15 partner departments to UCLA's new "Institute for Quantitative and Computational Biosciences". This split is to their joint detriment and as he says:

The laundry list of differences between biology and math that I aired above can be overwhelming. Real contact between the subjects will be difficult to foster, and it should be acknowledged that it is neither necessary nor sufficient for the science to progress. But wouldn't it be better if mathematicians proved they are serious about biology and biologists truly experimented with mathematics?

David Mumford asks on his blog, "Can one explain schemes to biologists?" in the context of his and John Tate's obituary for Grothendieck being prepared for publication in Nature. He offers a first draft obit which was rejected as too technical, along with a lament about the chasm between math and other scientific fields. Their draft introduces Grothendieck's field of algebraic geometry as follows: "This is the field where one studies the locus of solutions of sets of polynomial equations by combining the algebraic properties of the rings of polynomials with the geometric properties of this locus, known as a variety."

I find it surprising how someone who has worked at the interface of mathematics, applied math and biology for so long was surprised at Nature's reception. Of course this is too technical!

So, I wanted to try to take up Mumford's challenge.

Here's my first draft. Comments welcome.

Algebraic geometry is about solving equations. Not fancy equations involving trigonometric functions and exponentials, but ordinary, garden-variety equations involving ( x, x^2, x^3 ) and so on. Here's one: ( 3x^2 + 4x = 5x^3 + 6 ). Moving everything to the left side, we can write this as ( -6 + 4x + 3x^2 - 5x^3 = 0 ). The thing on the left is a polynomial, that is a sum of terms, each one a multiple of some pure power of

x. Let us call the polynomial ( f(x) ). So, deciding that we'll move everything to the left side, we study equations like ( f(x) = 0 ). We can also study polynomials in two variables, such as ( g(x,y) = x^2 - y^2 ). In this case, the equation ( g(x,y) = 0 ) can be solved by factoring: ( x^2 - y^2 = 0 ) means (( x+y)(x-y)=0 ), so the solutions are either ( y=x ) or ( y=-x ). That is, any point that lies on either the line ( y=x ) or ( y=-x ) (or both) gives a solution: indeed the point (4,-4) is solution since ( 4^2-(-4)^2 = 0 ). The set of solutions looks like a giant, infinite X shape. Some polynomials cannot be factored. For example, if we put ( h(x,y) = x^2 + y^2 - 2 ) then ( h(x,y) = 0 ) means ( x^2 + y^2 = 2 ), and the set of solutions looks like a circle. Note that the giant, infinite X had two pieces, corresponding to the factors of ( f(x,y) ), while the circle has one piece, corresponding to the fact that we could not factor ( h(x,y) ). We could also consider solving more than one equation simultaneously. For example, if we try to solve ( g(x,y) = 0 ) and ( h(x,y) = 0 ), this means that we need to find a point that both lies on the giant, infinite X and the circle ( h(x,y) = 0 ). In total, there are four such points: (1,1), (1,-1), (-1,1), and (-1,-1). So algebraic geometry tries to characterize what the set of solutions to some number of polynomial equations looks like.Polynomials are interesting mathematical objects because, like numbers, you can multiply them. Think of a polynomial

j with one variable, say, as a rule which transforms a number x into a new number ( j(x) ). Then given two polynomials, j and k, we can define the product jk by the rule which transforms x into the number ( j(x)k(x) ), i.e. the product of ( j(x) ) and ( k(x) ). You can also add them: ( j+k ) transforms x into the sum of ( j(x) ) and ( k(x) ). And as with numbers, you get distributivity and other nice properties. In mathematical language, we say that the polynomials form a ring.

In fact, above we found that certain polynomials could be factored, while others could not. This is analogous to the fact that certain whole numbers can be factored, e.g. 6 = 2 × 3, while "prime" numbers such as 5 cannot. This algebraic property (being prime or composite) is reflected in the geometry of the solution space: the prime polynomial ( h(x,y) ) had one component in the geometry of its solution set (a circle), while the composite polynomial ( g(x,y) ) had two (the two lines which cross). Algebraic geometry is the study of this interplay. For example, note that both g and h were "degree-two" polynomials, since terms like ( x^2 ) or ( y^2 ) involve the multiplication of two things, like an x with an x or a y with a y, and two is the maximum number of factors required by any term in the polynomial. When we considered the simultaneous set of solutions to g and h, we found four points. Here we meet a demonstration of a theorem of algebraic geometry: Bezout's theorem says that the number of intersection points equals the product of the degrees, and indeed here we have 4 = 2 × 2.

A function is a machine which takes as input a point in some space and has as output a number. For example, our polynomials g and h are functions on the plane, since the inputs ( (x,y) ) are points in the plane. The output, such as ( g(2,3) = 2^2 - 3^2 = -5 ), is always a number. Recall that the set of points to which g assigns the number zero formed a giant X. What if we wanted to talk about functions on that X itself? That is, what if we were interested in assigning a number to each point on that X? In algebraic geometry, we often want to do such a thing. In order to study a space, you might study how it appears inside other spaces (such as the X in the plane) and how other spaces appear inside it (such as the four points inside the X).

Now here is one way to consider a function on X. Start with a function on the plane and restrict your inputs to points which lie on X. For example, we could apply the function ( h(x,y) ) to points ( (x,y) ) that lie on X (which is to say, points with ( g(x,y)=0 ) ). That's fine, but then you soon realize that sometimes two different functions on the plane restrict to the same function on X. For instance, if we compare h and ( h+g ), then on the plane they are different but on X they are the same, since ( h+g ) equals ( h+0 ) (g being zero on X), and ( h+0 ) equals h. After we impose this notion of sameness, we get a new "ring" of functions, and in general these rings can have interesting properties. For example, consider the function ( j(x,y) = x+y ) as a function on X. Note j is equal to zero along the line from northwest to southeast, but j is nonzero on the other line. Therefore j is not the "zero" function which assigns zero to every point. Likewise, the function ( k(x,y) = x-y ) is nonzero, but is equal to zero along the line from southwest to northeast. Now note that on X we have ( jk = 0 ): the product of these two nonzero functions is zero when considered as functions on X. This is a very different phenomenon from what we are used to with numbers. With numbers, if the product of two numbers is zero, then one of them must be zero (possibly both). The lesson is that functions can be multiplied just like polynomials, and sometimes the ring that they define can be interesting in novel ways, such as having the product of two nonzero objects be zero.

Recap: we can learn about the geometry of the space of solutions of some polynomial equations by studying their algebraic properties. The relationship between factoring and having multiple components was one example; Bezout's theorem was another. Functions on a space organize into an algebraic structure called a ring, since you can multiply them, and these rings can be more exotic than the rings formed by numbers or by plain old polynomials.
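Both facts in the recap can be checked directly, assuming the sympy library is available; the names g, h, j, k follow the text:

```python
import sympy as sp

x, y = sp.symbols("x y")
g = x**2 - y**2        # the X: two crossing lines
h = x**2 + y**2 - 2    # the circle of radius sqrt(2)

# Bezout's theorem: two degree-two curves meet in 2 * 2 = 4 points.
points = sp.solve([g, h], [x, y])
assert len(points) == 4        # (1,1), (1,-1), (-1,1), (-1,-1)

# Zero divisors on X: working "on X" means reducing modulo g.
j, k = x + y, x - y
_, jk_on_X = sp.reduced(j * k, [g], x, y)
assert jk_on_X == 0            # j*k is the zero function on X, yet neither j nor k is
```

The last reduction works because ( jk = x^2 - y^2 ), which is exactly g, and g vanishes on X.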

Now here is the crucial insight: we free ourselves from geometry and simply describe a space by its ring of functions. The plane would be described by the polynomials in two variables (omitting, always, the fancy trigonometric functions and such, in the land of algebra). The X would be described by that ring but with h and ( h+g ) thought of as the same, i.e. with g being identified with the zero function. The one-dimensional line would be described by polynomials in one variable. Even a single point can be described in this way! A function on a point must assign to that point a number, so the ring of functions is the ordinary ring of numbers, where multiplication is the usual multiplication. So we may think of each space as providing a generalization of the algebraic structure of ordinary numbers: each space is defined by (or defines) a ring of functions. This construction gives many interesting objects -- the so-called "affine schemes" -- but algebraic geometry contains yet more.

A scheme is a space described locally by a ring of functions. To give a flavor for what this might mean -- particularly the word "locally" -- consider a space which looks more like a Q than an X. Near where the tail of the Q meets the circle, there is a crossing which looks like a miniature X. What that means is that we should be able to zoom in our perspective and describe the points near the crossing as we would describe the X itself. Now in truth, giving a notion of "nearness" can be subtle. Up until this point, we haven't relied on distances. For instance, we could have made all the same essential conclusions above using ( x^2 + y^2 - 50 ) instead of ( x^2 + y^2 - 2 ), and the circle h described would have been five times as large. Thus a scheme is a set of points equipped with a notion of nearness, such that on each "small" region a ring of functions is given. Further, these rings of functions must be compatible on the overlaps of regions: if a small region A is contained in both region B and region C, then the functions on A can be considered as restrictions of functions on B or as restrictions of functions on C.

That's about it. You take your geometric object (if you have a notion of geometry), look at a "small region" and describe the object by some equations. These equations tell you what the space of functions on the object is (e.g. which polynomials to consider "the same"). And you do this on enough small regions so that the whole object is described. If you want to free yourself from geometry entirely, you must provide a set with a notion of "nearby points," and give a ring (of functions) for each such neighborhood.
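The identification of g with the zero function can be made computational, again assuming sympy is available; the helper name `same_on_X` is ours:

```python
import sympy as sp

x, y = sp.symbols("x y")
g = x**2 - y**2          # identified with the zero function on X
h = x**2 + y**2 - 2

def same_on_X(p, q):
    """True when p and q restrict to the same function on X = { g = 0 },
    i.e. when p - q is a polynomial multiple of g."""
    _, remainder = sp.reduced(p - q, [g], x, y)
    return remainder == 0

assert same_on_X(h, h + g)       # h and h + g agree on X
assert not same_on_X(h, h + 1)   # differing by a nonzero constant: not the same
```

Reduction modulo g is exactly the "notion of sameness" from the text: two plane polynomials define the same function on X when their difference leaves remainder zero.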

In fact, the only thing left to specify here is what we meant at the start by a "number." That is, we have to decide on the ring of functions on a point! This choice determines which "numbers" we are allowed to use in our polynomial expressions. We might have meant the real numbers, the rational numbers, the complex numbers, the whole numbers, or -- and here it gets deep very fast -- something more exotic. The only thing we really require is that whatever we decide a "number" is, multiplication is associative and commutative, as for ordinary numbers.

Why do all this? Having "algebraicized" the problem completely, we see the power of this approach when geometry breaks down. For example, if you plot the solutions to the equation ( y^2 - x^3 = 0 ) you see a pointy object which doesn't have a tangent line at the origin (0,0). So certain geometric constructions are off limits. However, this kind of space poses no problems in scheme theory. The ring of functions is simply obtained: you need to set two polynomials equal to each other if they differ by a multiple of ( y^2 - x^3 ), since that is zero on the space.
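The same reduction trick handles the cusp with no trouble; here is a small check, assuming sympy is available:

```python
import sympy as sp

x, y, t = sp.symbols("x y t")
c = y**2 - x**3              # the cuspidal curve: singular at the origin

# On the curve, y**2 and x**3 are the same function, since they differ by c.
_, r = sp.reduced(y**2 - x**3, [c], x, y)
assert r == 0

# Geometry breaks down at the cusp, but points of the curve are easy to
# produce: the parametrization t -> (t**2, t**3) satisfies the equation.
assert c.subs({x: t**2, y: t**3}) == 0
```

The ring of functions on the cusp is defined purely algebraically, even though the tangent line at the origin is not.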

Obviously, Tate and Mumford were not afforded this much space by Nature, and just as obviously, without constraints they could communicate these ideas, too -- and better. Whatever its origin, the challenge was a good one. Did I meet it?

Eric, this is certainly a simple introduction to some of the ideas needed to explain schemes. But I think that it also illustrates why mathematicians are often unsuccessful in explaining their ideas to other scientists. The reason is that it seems to me to suffer from the mathematician's compulsion to always be 100% precise and complete, defining every concept used. All scientists already know what a function is, and the idea of restricting a function to some smaller set does not need to be spelled out in such detail. The second issue is that mathematicians, when giving examples, tend to start with trivial examples instead of going for an example that best illustrates the core idea. In your case, I think the equation ( x^2-y^2=0 ) is just too simple, and emphasizing reducible varieties seems to me just distracting. In the version Nature accepted, John and I use your third example, the circle -- a variety certainly known to all scientists -- and say "Algebraic geometry is the field that studies the solutions of sets of polynomial equations by looking at their geometric properties. For instance, a circle is the set of solutions of ( x^2+y^2 = 1 ), and in general such a set of points is called a variety." I think the trick is to bootstrap the math on things scientists know, simplifying definitions (Stewart's maxim: "Lie a little") and getting to some core non-trivial motivating example if possible.

Jan. 5. I received from Jean-Michel Kantor a three-part obituary by Michel de Pracontal containing some excellent efforts at explaining Grothendieck's work to lay people. I quote some of his article here. First, a quote from Michel Demazure:

But frankly, I was quite disappointed by their struggle to say something meaningful about what schemes and functors are. They start, as John and I finally did, with a circle, but discuss how one can look at the integer and rational solutions of the equation of a circle as well as the real and complex solutions. This leads them to the following passage, where schemes and functors are strangely conflated. I'm not sure why they say a set of equations could have no solutions -- what happened to the *Nullstellensatz*? I guess they meant the variety has no points rational over the ground field.

Now, mathematics being the land of freedom, there is no reason not to consider the solutions of an equation, or of a system of equations, over any of the kinds of numbers mentioned above. Which considerably enriches, once again, the variety of varieties.

And this is where Grothendieck comes in. Recall that a variety is a geometric object which represents the solutions of a system of equations. But there are cases where the system has no solution, so that the corresponding variety has no points. It cannot be drawn as a geometric figure. But can it still be studied? Grothendieck's idea is to generalize the notion of variety by way of its algebraic properties, "ignoring" the points: "Grothendieck does not worry about the points; he deliberately forgets them," explains the French mathematician Jean-Michel Kantor. "His reasoning amounts to saying: even if I have an equation without solutions, I want to be able to study this object, so I will gather together a whole series of varieties, without knowing whether there are points, and I will construct a more general object which includes all the possible cases."

This more general object is called a "scheme". The interest of schemes is that they enlarge the framework of algebra while preserving its most important properties. Schemes make it possible to treat within the same framework the world of whole numbers and that of continuous quantities, answering questions raised by Diophantus 1,800 years ago. Thus, with schemes, our circle can be studied just as well over the integers as over the reals or any other kind of number.


## Using the Vertical Line Test

As we have seen in some examples above, we can represent a function using a graph. Graphs display a great many input-output pairs in a small space. The visual information they provide often makes relationships easier to understand. By convention, graphs are typically constructed with the input values along the horizontal axis and the output values along the vertical axis.

The most common graphs name the input value x and the output y, and we say y is a function of x, or y = f(x) when the function is named f. The graph of the function is the set of all points (x, y) in the plane that satisfy the equation y = f(x). If the function is defined for only a few input values, then the graph of the function consists of only a few points, where the x-coordinate of each point is an input value and the y-coordinate of each point is the corresponding output value. For example, the black dots on the graph in Figure 10 tell us that f(0) = 2 and f(6) = 1. However, the set of all points (x, y) satisfying y = f(x) is a curve. The curve shown includes (0, 2) and (6, 1) because the curve passes through those points.

Figure 10: Graph of a polynomial.
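A small sketch can make the different representations concrete; the quadratic rule below is made up for illustration (the polynomial in the figure is not specified), though it happens to satisfy f(0) = 2 like the graph described above:

```python
# Symbolic representation: a compact rule for computing outputs.
# (This particular polynomial is a made-up example, not the one in the figure.)
def f(x):
    return x**2 - 3*x + 2

# Table representation: explicit input-output pairs.
table = {x: f(x) for x in range(4)}
assert table == {0: 2, 1: 0, 2: 0, 3: 2}

# Graph representation: the set of points (x, f(x)) in the plane.
graph = {(x, f(x)) for x in range(4)}
assert (0, 2) in graph       # like reading "f(0) = 2" off the graph
```

Each representation carries the same input-output pairs; they differ only in how those pairs are presented.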

The vertical line test can be used to determine whether a graph represents a function. If we can draw any vertical line that intersects a graph more than once, then the graph does not define a function, because a function has only one output value for each input value. See Figure 11.

Figure 11: Three graphs visually showing what is and is not a function.

How to: Given a graph, use the vertical line test to determine whether the graph represents a function

- Inspect the graph to see if any vertical line drawn would intersect the curve more than once.
- If there is any such line, determine that the graph does not represent a function.
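The steps above can be sketched as a discrete check, assuming the graph is given as a finite sample of (x, y) points; the function name is ours:

```python
def passes_vertical_line_test(points):
    """Return False as soon as some vertical line (a fixed x) would
    intersect the sampled graph at two different y-values."""
    outputs = {}
    for x, y in points:
        if x in outputs and outputs[x] != y:
            return False       # two outputs for one input: not a function
        outputs[x] = y
    return True

line = [(-1, 3), (0, 2), (1, 1)]              # a line: each x appears once
circle = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # a circle: x = 0 gives y = 1 and y = -1
assert passes_vertical_line_test(line)
assert not passes_vertical_line_test(circle)
```

The circle fails because the vertical line x = 0 hits two points, which is exactly the situation the test is designed to detect.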

Example 12: Applying the Vertical Line Test

Which of the graphs in Figure 12 represent(s) a function y = f(x)?

Figure 12: Graph of a polynomial (a), a downward-sloping line (b), and a circle (c).

If any vertical line intersects a graph more than once, the relation represented by the graph is not a function. Notice that any vertical line would pass through only one point of the two graphs shown in parts (a) and (b) of Figure 12. From this we can conclude that these two graphs represent functions. The third graph does not represent a function because, at most x-values, a vertical line would intersect the graph at more than one point, as shown in Figure 13.

Figure 13: Graph of a circle.

Does the graph in Figure 14 represent a function?

Figure 14: Graph of an absolute value function.

**Answer:** Yes. Any vertical line intersects the graph of the absolute value function at most once, so the graph represents a function.

## Types of EKG Tests

Besides the standard EKG, your doctor may recommend other kinds:

**Holter monitor.** This portable EKG checks the electrical activity of your heart continuously for 24 to 48 hours. Your doctor may suggest it if they suspect you have an abnormal heart rhythm, if you have palpitations, or if your heart muscle may not be getting enough blood flow.

Like the standard EKG, it's painless. The electrodes from the monitor are taped to your skin. Once they're in place, you can go home and do all of your normal activities except showering. Your doctor will ask you to keep a diary of what you did and any symptoms you notice.

**Event monitor.** Your doctor may suggest this device if you only get symptoms now and then. When you push a button, it will record and store your heart's electrical activity for a few minutes. You may need to wear it for weeks or sometimes months.

Each time you notice symptoms, you should try to get a reading on the monitor. The information is sent over the phone to your doctor, who will analyze it.