逸之


2022/01/07


Big-Oh Notation

What does it mean to be efficient? Cornell professors Bobby Kleinberg and Eva Tardos have a wonderful explanation in chapter 2 of their textbook, *Algorithm Design* (2006). This appendix is a summary and reinterpretation of that explanation from a functional programming perspective. The ultimate answer will be that an algorithm is efficient if its worst-case running time on input size n is O(n^d) for some constant d. But it will take us several steps to build up to that definition.

Algorithms and Efficiency, Attempt 1

A naive attempt at defining efficiency might proceed as follows:


Attempt 1: An algorithm is efficient if, when implemented, it runs in a small amount of time on particular input instances.


But there are many problems with that definition, such as:


  • Inefficient algorithms can run quickly on small test cases.

  • Fast processors and optimizing compilers can make inefficient algorithms run quickly.

  • Efficient algorithms can run slowly when coded sloppily.

  • Some input instances are harder than others.

  • Efficiency on small inputs doesn't imply efficiency on large inputs.

  • Some clients can afford to be more patient than others; quick for me might be slow for you.

Lesson 1: One lesson learned from that attempt is that time measured by a clock is not the right metric for algorithm efficiency. We need a metric that is reasonably independent of the hardware, compiler, other software that is running, etc. Perhaps a good metric would be to give up on time and instead count the number of steps taken during evaluation.

But, now we have a new problem: how should we define a "step"? It needs to be something machine independent. It ought to somehow represent a primitive unit of computation. There's a lot of flexibility. Here are some common choices:


  • In pseudocode, we might think of a step as being a single line.

  • In imperative languages, we might count assignments, array indexes, pointer dereferences, and arithmetic operations as steps.

  • In OCaml, we could count function or operator application, let bindings, and choosing a branch of an if or match as steps.

In reality, each of those "steps" could take a different amount of time. But in practice, that doesn't really matter much.
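To make the OCaml notion of a "step" concrete, here is a small sketch (our own illustration, not from the original text) that instruments a list-length function with a mutable counter, charging one step per function application:

```ocaml
(* A sketch of step counting, assuming we charge one "step" per function
   application. [steps] and [length_counted] are names invented for this
   illustration. *)
let steps = ref 0

let rec length_counted lst =
  incr steps;  (* one step for this application *)
  match lst with
  | [] -> 0
  | _ :: t -> 1 + length_counted t

let () =
  let n = length_counted [ 1; 2; 3; 4; 5 ] in
  (* one application per cons cell, plus one for the empty list *)
  Printf.printf "length = %d, steps = %d\n" n !steps
```

Running it reports 6 steps for a 5-element list: the count grows linearly with input size, independent of hardware or compiler, which is exactly the kind of metric we want.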

Lesson 2: Another lesson we learned from attempt 1 was: running time on a particular input instance is not the right metric. We need a metric that can predict running time on any input instance. So instead of using the particular input (e.g., a number, or a matrix, or a text document), it might be better to use the size of the input (e.g., the number of bits it takes to represent the number, or the number of rows and columns in the matrix, or the number of bytes in the document) as the metric.


But again we have a new problem: how to define "size"? As in the examples we just gave, size should be some measure of how big an input is compared to other inputs. Perhaps the most common representation of size in the context of data structures is just the number of elements maintained by the data structure: the number of nodes in a list, or the number of nodes and edges in a graph, etc.


Could an algorithm run in a different amount of time on two inputs of the same "size"? Sure. For example, multiplying a matrix by all zeroes might be faster than multiplying by arbitrary numbers. But in practice, size matters more than exact inputs.


Lesson 3: A third lesson we learned from attempt 1 was that "small amount of time" is too relative a term. We want a metric that is reasonably objective, rather than relying on subjective notions of what constitutes "small".


One sort-of-okay idea would be that an efficient algorithm needs to beat brute-force search. That means enumerating all answers one-by-one, checking each to see whether it's right. For example, a brute-force sorting algorithm would enumerate every possible permutation of a list, checking to see whether it's a sorted version of the input list. That's a terrible sorting algorithm! Certainly quicksort beats it.


Brute-force search is the simple, dumb solution to nearly any algorithmic problem. But it requires enumeration of a huge space. In fact, an exponentially-sized space. So a better idea would be doing less than exponential work. What's less than exponential (e.g., 2ⁿ)? One possibility is polynomial (e.g., n²).
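To see just how dumb brute force is, here is an OCaml sketch (the helper names are our own) that sorts by enumerating every permutation and returning the first sorted one, an n!-sized search for a problem quicksort solves in expected O(n log n) steps:

```ocaml
(* Brute-force sorting: enumerate all permutations of the input and return
   the first one that is sorted. The helpers are our own illustration. *)

(* All ways to insert [x] into a list. *)
let rec insert_everywhere x = function
  | [] -> [ [ x ] ]
  | y :: ys ->
      (x :: y :: ys) :: List.map (fun p -> y :: p) (insert_everywhere x ys)

(* All permutations of a list: factorially many. *)
let rec permutations = function
  | [] -> [ [] ]
  | x :: xs -> List.concat (List.map (insert_everywhere x) (permutations xs))

let rec is_sorted = function
  | [] | [ _ ] -> true
  | x :: (y :: _ as rest) -> x <= y && is_sorted rest

let brute_force_sort lst = List.find is_sorted (permutations lst)
```

Here `brute_force_sort [3; 1; 2]` returns `[1; 2; 3]`, but only after generating up to 3! = 6 candidates; at n = 20 the search space already exceeds 10¹⁸.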

An immediate objection might be that polynomials come in all sizes. For example, n¹⁰⁰ is way bigger than n². And some non-polynomials, such as n^(1 + .02(log n)), might do an adequate job of beating exponentials. But in practice, polynomials do seem to work fine.

Algorithms and Efficiency, Attempt 2

Combining lessons 1 through 3 from Attempt 1, we have a second attempt at defining efficiency:


Attempt 2: An algorithm is efficient if its maximum number of execution steps is polynomial in the size of its input.


Note how all three ideas come together there: steps, size, polynomial.


But if we try to put that definition to use, it still isn't perfect. Coming up with an exact formula for the maximum number of execution steps can be insanely tedious. For example, in one other algorithm textbook, the authors develop the following polynomial for the number of execution steps taken by a pseudo-code implementation of insertion sort:

c₁n + c₂(n − 1) + c₄(n − 1) + c₅ Σ_{j=2}^{n} t_j + c₆ Σ_{j=2}^{n} (t_j − 1) + c₇ Σ_{j=2}^{n} (t_j − 1) + c₈(n − 1)

No need for us to explain what all the variables mean. It's too complicated. Our hearts go out to the poor grad student who had to work out that one!


That formula for running time of insertion sort is from *Introduction to
Algorithms*, 3rd edition, 2009, by Cormen, Leiserson, Rivest, and Stein. We
aren't making fun of them. They would also tell you that such formulas are too
complicated.

Precise execution bounds like that are exhausting to find and somewhat meaningless. If it takes 25 steps in Java pseudocode, but compiled down to RISC-V would take 250 steps, is the precision useful?


In some cases, yes. If you're building code that flies an airplane or controls a nuclear reactor, you might actually care about precise, real-time guarantees.


But otherwise, it would be better for us to identify broad classes of algorithms with similar performance. Instead of saying that an algorithm runs in

1.62n² + 3.5n + 8

steps, how about just saying it runs in n² steps? That is, we could ignore the low-order terms and the constant factor of the highest-order term.

We ignore low-order terms because we want to THINK BIG. Algorithm efficiency is all about explaining the performance of algorithms when inputs get really big. We don't care so much about small inputs. And low-order terms don't matter when we think big. The following table shows the number of steps as a function of input size N, assuming each step takes 1 microsecond. "Very long" means more than the estimated number of atoms in the universe.


| N         | n steps | n² steps | n³ steps  | 2ⁿ steps  |
|-----------|---------|----------|-----------|-----------|
| 10        | < 1 sec | < 1 sec  | < 1 sec   | < 1 sec   |
| 100       | < 1 sec | < 1 sec  | 1 sec     | very long |
| 1,000     | < 1 sec | 1 sec    | 18 min    | very long |
| 10,000    | < 1 sec | 2 min    | 12 days   | very long |
| 100,000   | < 1 sec | 3 hours  | 32 years  | very long |
| 1,000,000 | 1 sec   | 12 days  | 10⁴ years | very long |

As you can see, when inputs get big, there's a serious difference between n, n², and n³. We might as well ignore low-order terms, because they are completely dominated by the highest-order term when we think big.
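The arithmetic behind the table is easy to check. A quick sketch, assuming (as the table does) one microsecond per step:

```ocaml
(* Converting step counts to wall-clock time, assuming 1 µs per step. *)
let seconds_of_steps steps = steps /. 1e6
let days_of_steps steps = seconds_of_steps steps /. 86_400.

let () =
  let n = 1e4 in
  (* n³ steps at n = 10,000 is 10¹² µs = 10⁶ s, i.e., roughly 12 days *)
  Printf.printf "%.1f days\n" (days_of_steps (n ** 3.))
```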

What about constant factors? My current laptop might be 2x faster (that is, a constant factor of 2) than the one I bought several years ago, but that's not an interesting property of the algorithm. Likewise, 1.62n² steps in pseudocode might be 1620n² steps in assembly (that is, a constant factor of 1000), but it's again not an interesting property of the algorithm. So, should we really care if one algorithm takes 2x or 1000x longer than another, if it's just a constant factor?

The answer is: maybe. Performance tuning in real-world code is about getting the constants to be small. Your employer might be really happy if you make something run twice as fast! But that's not about the algorithm. When we're measuring algorithm efficiency, in practice the constant factors just don't matter much.


So all that argues for having an imprecise abstraction to measure running time. Instead of 1.62n² + 3.5n + 8, we can just write n². Imprecise abstractions are nothing new to you. You might write ± 1 to imprecisely abstract a quantity within 1. In computer science, you already know that we use Big-Oh notation as an imprecise abstraction: 1.62n² + 3.5n + 8 is O(n²).

Big-Ell Notation

Before reviewing Big-Oh notation, let's start with something simpler that you might not have seen before: Big-Ell notation.


Big-Ell is an imprecise abstraction of natural numbers less than or equal to another number, hence the L. It's defined as follows:

L(n) = { m | 0 ≤ m ≤ n }

where m and n are natural numbers. That is, L(n) represents all the natural numbers less than or equal to n. For example, L(5) = {0, 1, 2, 3, 4, 5}.

Could you do arithmetic with Big-Ell? For example, what would 1 + L(5) be? It's not a well-posed question, to be honest: addition is an operation we think of being defined on integers, not sets of integers. But a reasonable interpretation of 1 + {0, 1, 2, 3, 4, 5} could be doing the addition for each element in the set, yielding {1, 2, 3, 4, 5, 6}. Note that {1, 2, 3, 4, 5, 6} is a proper subset of {0, 1, 2, 3, 4, 5, 6}, and the latter is L(6). So we could say that 1 + L(5) ⊆ L(6). We could even say that 1 + L(5) ⊆ L(7), but it's not tight: the former subset relation included the fewest possible extra elements, whereas the latter was loose by needlessly including extra.
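Since L(n) is just a finite set, we can model it directly in OCaml with lists and check the claims above (the function names here are our own invention, not from the text):

```ocaml
(* Big-Ell modeled as a list of naturals: big_ell n is {0, 1, ..., n}. *)
let big_ell n = List.init (n + 1) (fun i -> i)

(* Elementwise addition of a constant, and a subset test. *)
let add_to_each k s = List.map (fun m -> k + m) s
let subset s1 s2 = List.for_all (fun x -> List.mem x s2) s1

let () =
  assert (big_ell 5 = [ 0; 1; 2; 3; 4; 5 ]);
  assert (add_to_each 1 (big_ell 5) = [ 1; 2; 3; 4; 5; 6 ]);
  assert (subset (add_to_each 1 (big_ell 5)) (big_ell 6));
  (* also a subset of L(7), but that bound is not tight *)
  assert (subset (add_to_each 1 (big_ell 5)) (big_ell 7))
```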

For more about Big-Ell, see *Concrete Mathematics*, chapter 9, 1989, by Graham, Knuth, and Patashnik.

Big-Oh Notation

If you understand Big-Ell, and you understand functional programming, here's some good news: you can easily understand Big-Oh.


Let's build up the definition of Big-Oh in a few steps. We'll start with version 1, which we'll write as O₁. It's based on L:

  • L(n) represents any natural number that is less than or equal to a natural number n.

  • O₁(g) represents any natural function that is less than or equal to a natural function g.

A natural function is just a function on natural numbers; that is, its type is ℕ → ℕ.

All we do with O₁ is upgrade from natural numbers to natural functions. So Big-Oh version 1 is just the higher-order version of Big-Ell. How about that!

Of course, we need to work out what it means for a function to be less than another function. Here's a reasonable formalization:


Big-Oh Version 1:

O₁(g) = { f | ∀ n . f(n) ≤ g(n) }

For example, consider the function that doubles its input. In math textbooks, that function might be written as g(n) = 2n. In OCaml we would write let g n = 2 * n or let g = fun n -> 2 * n or just anonymously as fun n -> 2 * n. In math that same anonymous function would be written with lambda notation as λn. 2n. Proceeding with lambda notation, we have:

O₁(λn. 2n) = { f | ∀ n . f(n) ≤ 2n }

and therefore

  • (λn. n) ∈ O₁(λn. 2n),

  • (λn. 2n) ∈ O₁(λn. 2n), but

  • (λn. 3n) ∉ O₁(λn. 2n).
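Membership in O₁(g) quantifies over all naturals, so it can't be decided by running code, but we can at least falsify a claim on a finite prefix. A small checker of our own invention:

```ocaml
(* Check f(n) <= g(n) for all n up to [limit]. A counterexample disproves
   membership in O1(g); success is merely consistent with membership. *)
let consistent_with_o1 limit f g =
  let rec go n = n > limit || (f n <= g n && go (n + 1)) in
  go 0

let g = fun n -> 2 * n

let () =
  assert (consistent_with_o1 1000 (fun n -> n) g);           (* λn. n *)
  assert (consistent_with_o1 1000 (fun n -> 2 * n) g);       (* λn. 2n *)
  assert (not (consistent_with_o1 1000 (fun n -> 3 * n) g))  (* fails at n = 1 *)
```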

Next, recall that in defining algorithmic efficiency, we wanted to ignore constant factors. O₁ does not help us with that. We'd really like for all these functions:

  • (λn. n)

  • (λn. 2n)

  • (λn. 3n)

to be in O(λn. n).

Toward that end, let's define O₂ to ignore constant factors:

Big-Oh Version 2:

O₂(g) = { f | ∃ c > 0 . ∀ n . f(n) ≤ c g(n) }

That existentially-quantified positive constant c lets us "bump up" the function g to whatever constant factor we need. For example,

O₂(λn. n³) = { f | ∃ c > 0 . ∀ n . f(n) ≤ c n³ }

and therefore (λn. 3n³) ∈ O₂(λn. n³), because 3n³ ≤ c n³ if we take c = 3, or c = 4, or any larger c.
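The finite check from before extends to O₂ by testing a candidate constant c; for instance, c = 3 witnesses (λn. 3n³) ∈ O₂(λn. n³) on any prefix we try. Again a sketch of our own:

```ocaml
(* Check f(n) <= c * g(n) for all n up to [limit]. *)
let holds_with_constant c limit f g =
  let rec go n = n > limit || (f n <= c * g n && go (n + 1)) in
  go 0

let cube = fun n -> n * n * n

let () =
  assert (holds_with_constant 3 100 (fun n -> 3 * cube n) cube);
  (* without the constant factor, the inequality fails immediately *)
  assert (not (holds_with_constant 1 100 (fun n -> 3 * cube n) cube))
```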

Finally, recall that we don't care about small inputs: we want to THINK BIG when we analyze algorithmic efficiency. It doesn't matter whether the running time of an algorithm happens to be a little faster or a little slower for small inputs. In fact, we could just hardcode a lookup table for those small inputs if the algorithm is too slow on them! What matters really is the performance on big-sized inputs.


Toward that end, let's define O₃ to ignore small inputs:

Big-Oh Version 3:

O₃(g) = { f | ∃ c > 0 . ∃ n₀ > 0 . ∀ n ≥ n₀ . f(n) ≤ c g(n) }

That existentially quantified positive constant n₀ lets us "ignore" all inputs of that size or smaller. For example,

O₃(λn. n²) = { f | ∃ c > 0 . ∃ n₀ > 0 . ∀ n ≥ n₀ . f(n) ≤ c n² }

and therefore (λn. 2n) ∈ O₃(λn. n²), because 2n ≤ c n² if we take c = 2 and n₀ = 2. Note how we get to ignore the fact that λn. 2n is temporarily a little too big at n = 1 by picking n₀ = 2. That's the power of ignoring "small" inputs.
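O₃ adds the threshold n₀ to the finite check. With c = 2 and n₀ = 2 the check of (λn. 2n) against λn. n² passes, while starting from n = 1 with c = 1 exposes the "too big at small inputs" problem. A sketch under the same caveats as before:

```ocaml
(* Check f(n) <= c * g(n) for all n from [n0] up to [limit]. *)
let holds_from n0 c limit f g =
  let rec go n = n > limit || (f n <= c * g n && go (n + 1)) in
  go n0

let f = fun n -> 2 * n
let g = fun n -> n * n

let () =
  assert (holds_from 2 2 1000 f g);        (* c = 2, n0 = 2: fine *)
  assert (not (holds_from 1 1 1000 f g));  (* at n = 1, 2n = 2 > n² = 1 *)
  assert (holds_from 2 1 1000 f g)         (* even c = 1 works once n >= 2 *)
```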

Big-Oh, Finished

Version 3 is the right definition of Big-Oh. We repeat it here, for real:


Big-Oh:

O(g) = { f | ∃ c > 0 . ∃ n₀ > 0 . ∀ n ≥ n₀ . f(n) ≤ c g(n) }

That's the final, important version you should know. But don't just memorize it. If you understand the derivation we gave here, you'll be able to recreate it from scratch anytime you need it.


Big-Oh is called an asymptotic upper bound. If f ∈ O(g), then f is at least as efficient as g, and might be more efficient.

Big-Oh Notation Warnings

Warning 1. Because it's an upper bound, we can always inflate a Big-Oh statement: for example, if f ∈ O(n²), then also f ∈ O(n³), and f ∈ O(2ⁿ), etc. But our goal is always to give tight upper bounds, whether we explicitly say that or not. So when asked what the running time of an algorithm is, you must always give the tightest bound you can with Big-Oh.

Warning 2. Instead of O(g) = { f | … }, most authors instead write O(g(n)) = { f(n) | … }. They don't really mean g applied to n. They mean a function parameterized on input n but not yet applied. This is badly misleading and generally a result of not understanding anonymous functions. Moral of that story: more people need to study functional programming.

Warning 3. Instead of λn. 2n ∈ O(λn. n²) nearly all authors write 2n = O(n²). This is a hideous and inexcusable abuse of notation that should never have been allowed and yet has permanently infected the computer science consciousness. The standard defense is that = here should be read as "is" not as "equals". That is patently ridiculous, and even those who make that defense usually have the good grace to admit it's nonsense. Sometimes we become stuck with the mistakes of our ancestors. This is one of those times. Be careful of this "one-directional equality" and, if you ever have a chance, teach your (intellectual) children to do better.

Algorithms and Efficiency, Attempt 3

Let's review. Our first attempt at defining efficiency was:


Attempt 1: An algorithm is efficient if, when implemented, it runs in a small amount of time on particular input instances.


By replacing time with steps, particular instances with input size, and small with polynomial, we improved that to:


Attempt 2: An algorithm is efficient if its maximum number of execution steps is polynomial in the size of its input.


And that's really a pretty good definition. But using Big-Oh notation to make it a little more concrete, we can produce our third and final attempt:


Attempt 3: An algorithm is efficient if its worst-case running time on input size n is O(n^d) for some constant d.

By "worst-case running time" we mean the same thing as "maximum number of execution steps", just expressed in different and probably more common words. The worst-case is when execution takes the longest. "Time" is a common euphemism here for execution steps, and is used to emphasize we're thinking about how long a computation takes.


Space is the most common other feature of efficiency to consider. Algorithms can be more or less efficient at requiring constant or linear space, for example. You're already familiar with that from tail recursion and lists in OCaml.
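The familiar OCaml illustration: a naive recursive sum uses stack space linear in the list length, while the tail-recursive version runs in constant stack space. A minimal sketch:

```ocaml
(* Naive sum: one stack frame per element, so O(n) stack space. *)
let rec sum = function
  | [] -> 0
  | x :: t -> x + sum t

(* Tail-recursive sum: the accumulator lets the compiler reuse a single
   frame, so O(1) stack space. *)
let sum_tr lst =
  let rec go acc = function
    | [] -> acc
    | x :: t -> go (acc + x) t
  in
  go 0 lst

let () = assert (sum [ 1; 2; 3; 4 ] = 10 && sum_tr [ 1; 2; 3; 4 ] = 10)
```

Both compute the same result, but only `sum_tr` survives a million-element list without exhausting the stack.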
