16 September, 08:44

Note that f(x) is defined for every real x, but it has no roots. That is, there is no x∗ such that f(x∗) = 0. Nonetheless, we can find an interval [a, b] such that f(a) < 0 < f(b): just choose a = -1, b = 1. Why can't we use the intermediate value theorem to conclude that f has a zero in the interval [-1, 1]?

Answers (1)
  1. 16 September, 10:50
    Answer: Hello there!

    Things that we know here:

    f(x) is defined for every real x

    f(a) < 0 < f(b), where we take a = -1 and b = 1

    and the problem asks: "Why can't we use the intermediate value theorem to conclude that f has a zero in the interval [-1, 1]?"

    The theorem says:

    if f is continuous on the interval [a, b] and f(a) < u < f(b), then there exists a number c in [a, b] such that f(c) = u

    Notice that the function needs to be continuous on the interval, and nothing in the problem tells us that f(x) is continuous, so we can't apply the theorem. In fact, we can say more: since f is defined everywhere, satisfies f(-1) < 0 < f(1), and has no root, the intermediate value theorem forces f to be discontinuous somewhere in [-1, 1]. Continuity is exactly the hypothesis that fails.
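
    Since the problem doesn't give a formula for f, here is a minimal sketch with an illustrative choice of f (a step function; this is an assumption, not the problem's actual function). It is defined for every real x, satisfies f(-1) < 0 < f(1), yet has no root, and bisection, which implicitly relies on the IVT, just homes in on the jump at x = 0 instead of finding a zero:

    ```python
    # Illustrative example: defined for every real x, f(-1) < 0 < f(1),
    # but f never equals 0 (it jumps from -1 to 1 at x = 0).
    def f(x):
        return -1.0 if x <= 0 else 1.0

    print(f(-1), f(1))  # -1.0 1.0, so f(-1) < 0 < f(1)

    # Bisection assumes the IVT's conclusion; here it converges to the
    # discontinuity at x = 0, and f stays at -1 or 1 the whole time.
    a, b = -1.0, 1.0
    for _ in range(50):
        m = (a + b) / 2
        if f(m) < 0:
            a = m
        else:
            b = m
    print(a, b, f(a), f(b))  # interval shrinks toward 0, yet f(a) = -1.0, f(b) = 1.0
    ```

    This shows concretely why the continuity hypothesis matters: the sign change alone is not enough to guarantee a zero.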