{{more footnotes|date=October 2010}}
{{for|vectorization as a programming idiom|Array programming}}
'''Automatic vectorization''', in [[parallel computing]], is a special case of automatic [[parallelization]], where a [[computer program]] is converted from a [[scalar (computing)|scalar]] implementation, which processes a single pair of [[operand]]s at a time, to a [[Array data structure|vector]] implementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers (as well as specialized [[supercomputers]]) typically have [[vector processing|vector operations]] that can perform the four additions


:<math>\begin{align}
    c_1 & = a_1 + b_1 \\
    c_2 & = a_2 + b_2 \\
    c_3 & = a_3 + b_3 \\
    c_4 & = a_4 + b_4
\end{align}</math>


all at once. However, in most [[programming language]]s, one typically writes loops that perform additions on many numbers, for example (in [[C (programming language)|C]]):
 
<source lang="c">
for (i=0; i<n; i++)
    c[i] = a[i] + b[i];
</source>
 
The goal of a vectorizing [[compiler]] is to transform such a loop into a sequence of vector operations that perform additions on length-four (in our example) blocks of elements from the arrays <code>a</code>, <code>b</code> and <code>c</code>. Automatic vectorization is a major research topic in computer science.
 
==Background==
Early computers generally had one logic unit that sequentially executed one instruction on one operand pair at a time. Computer programs and [[programming language]]s were accordingly designed to execute sequentially. Modern computers can do many things at once. Many optimizing compilers feature auto-vectorization, a compiler feature where particular parts of sequential programs are transformed into equivalent parallel ones, to produce code that makes good use of a vector processor. It would be much simpler for a compiler to produce efficient code from a programming language designed for vector processors, but since much real-world code is sequential, the optimization is of great utility.
 
'''Loop vectorization''' transforms procedural loops that iterate over multiple pairs of data items so that each pair is processed by a separate lane of the vector unit. Most programs spend most of their execution time within such loops, so vectorizing them can yield significant performance gains without programmer intervention, especially on large data sets. Vectorization can sometimes instead slow execution because of [[Pipeline (computing)|pipeline]] synchronization, data-movement timing and other issues.
 
[[Intel]]'s [[MMX (instruction set)|MMX]], [[Streaming SIMD Extensions|SSE]], [[Advanced Vector Extensions|AVX]] and [[Power Architecture]]'s [[Altivec|AltiVec]] and [[ARM Holdings|ARM]]'s [[ARM NEON|NEON]] instruction sets support such vectorized loops.
 
Many constraints prevent or hinder vectorization. [[Loop dependence analysis]] identifies loops that can be vectorized, relying on the [[data dependence]] of the instructions inside loops.
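
For example, in the following loop each iteration reads a value written by the previous iteration, a loop-carried true dependence of distance one that prevents straightforward vectorization (a minimal illustrative sketch):

<source lang="c">
for (i = 1; i < n; i++)
    a[i] = a[i-1] + b[i]; // a[i] needs a[i-1] from the previous iteration,
                          // so the iterations cannot run in lockstep
</source>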
 
==Guarantees==
Automatic vectorization, like any [[loop optimization]] or other compile-time optimization, must exactly preserve program behavior.
 
===Data dependencies===
All dependencies must be respected during execution to prevent incorrect outcomes.
 
In general, loop-invariant dependencies and lexically forward dependencies can be easily vectorized, and lexically backward dependencies can be transformed into lexically forward ones. These transformations must be done safely, to ensure that the dependences between '''all statements''' remain true to the original program.
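
As an illustrative sketch, the loop below has a lexically backward dependence: the second statement writes <code>b[i+1]</code>, which the first statement reads in the next iteration. Swapping the two statements makes the dependence lexically forward without changing the values computed, after which both statements can be vectorized:

<source lang="c">
/* before: the dependence runs from the later statement to the earlier one */
for (i = 0; i < n; i++) {
    a[i]   = b[i] + 1;   // reads b[i], written by the statement below
                         // in the previous iteration
    b[i+1] = c[i] * 2;
}

/* after reordering: the dependence is now lexically forward */
for (i = 0; i < n; i++) {
    b[i+1] = c[i] * 2;
    a[i]   = b[i] + 1;   // still reads the value written one iteration ago
}
</source>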
 
Cyclic dependencies must be processed independently of the vectorized instructions.
 
===Data precision===
[[Integer (computer science)|Integer]] [[Precision (computer science)|precision]] (bit size) must be kept during vector instruction execution. The correct vector instruction must be chosen based on the size and behavior of the internal integers. With mixed integer types, extra care must be taken to promote and demote them correctly without losing precision. Special care must be taken with [[sign extension]] (because multiple integers are packed inside the same register), with shift operations, and with operations involving [[carry bit]]s that would otherwise be lost.
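
For instance, mixing 16-bit and 32-bit integers forces the compiler to widen elements before operating on them (a minimal sketch; lane counts assume 128-bit vector registers):

<source lang="c">
short a[128]; // eight 16-bit elements fit in one 128-bit register
int   b[128]; // but only four 32-bit elements do

for (i = 0; i < 128; i++)
    b[i] = a[i] * 2; // each element of a[] must be sign-extended to 32 bits,
                     // so one vector of a[] unpacks into two vectors of b[]'s width
</source>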
 
[[Floating-point]] precision must be kept as well, unless [[IEEE-754]] compliance is turned off, in which case operations will be faster but the results may vary slightly. Large variations, even with IEEE-754 compliance disabled, usually indicate programmer error. The programmer can also force constants and loop variables to single precision (the default is normally double) to execute twice as many operations per instruction.
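
For example, in C a double-precision constant silently promotes the whole expression; keeping everything single precision doubles the number of lanes per instruction (an illustrative sketch):

<source lang="c">
float a[128], b[128], c[128];

for (i = 0; i < 128; i++)
    c[i] = a[i] * b[i] + 1.0f; // the 'f' suffix keeps the constant single
                               // precision; writing 1.0 would promote the
                               // arithmetic to double and halve the lane count
</source>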
 
==Theory==
To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and re-align them if necessary. Once the dependencies are mapped, the optimizer must arrange the implementing instructions, changing suitable candidates into vector instructions that operate on multiple data items.
 
===Building the dependency graph===
The first step is to build the [[dependency graph]], identifying which statements depend on which other statements. This involves examining each statement, identifying every data item that the statement accesses, mapping array access modifiers to functions, and checking every access's dependency against all other accesses in all statements. [[Alias analysis]] can be used to determine whether different variables access (or intersect) the same region in memory.
 
The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits and the array type is 32 bits, the vector size is 128/32 = 4. Non-cyclic dependencies with greater distance do not invalidate vectorization, since there will not be any concurrent access within the same vector instruction.
 
Suppose the vector size is the same as 4 ints:
 
<source lang="C">
for (i = 0; i < 128; i++) {
  a[i] = a[i-16]; // 16 > 4, safe to ignore
  a[i] = a[i-1]; // 1 < 4, stays on dependency graph
}
</source>
 
===Clustering===
Using the graph, the optimizer can then cluster the [[strongly connected components]] (SCC) and separate vectorizable statements from the rest.
 
For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, in which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees.
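
A minimal sketch of such a fragment, using the slice notation introduced later in this article (the concrete statements are hypothetical stand-ins for the three groups):

<source lang="c">
/* original loop: only the middle statement is free of cyclic dependences */
for (i = 0; i < 128; i++) {
    s = s + a[i];           // first group: reduction cycle on s
    b[i] = a[i] * 2;        // second group: independent iterations
    c[i] = c[i-1] + b[i];   // third group: cycle through c[i-1]
}

/* after distribution: three loops in the original order, middle one vectorized */
for (i = 0; i < 128; i++)    s = s + a[i];
for (i = 0; i < 128; i += 4) b[i:i+3] = a[i:i+3] * 2;   // slice notation
for (i = 0; i < 128; i++)    c[i] = c[i-1] + b[i];
</source>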
 
===Detecting idioms===
Some non-obvious dependencies can be further optimized based on specific idioms.
 
For instance, the following self-data-dependence can be vectorized because the values on the right-hand side ([[Left-hand side and right-hand side of an equation|RHS]]) are fetched before the result is stored to the left-hand side, so there is no way the data can change within the assignment.
 
<source lang="C">
a[i] = a[i] + a[i+1];
</source>
 
Self-dependence by scalars can be vectorized by variable elimination.
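
A minimal sketch: the temporary scalar below looks like a loop-carried dependence, but forward-substituting it leaves every iteration independent:

<source lang="c">
/* before: the scalar t is written and read in every iteration */
for (i = 0; i < n; i++) {
    t = a[i] - b[i];
    c[i] = t * t;
}

/* after eliminating t: no scalar is shared between iterations */
for (i = 0; i < n; i++)
    c[i] = (a[i] - b[i]) * (a[i] - b[i]);
</source>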
 
==General framework==
The general framework for loop vectorization is split into four stages, sketched in code after this list:
* '''Prelude''': Where the loop-independent variables are prepared to be used inside the loop. This normally involves moving them to vector registers with specific patterns that will be used in vector instructions. This is also the place to insert the run-time dependence check. If the check decides vectorization is not possible, branch to '''Cleanup'''.
* '''Loop(s)''': All vectorized (or not) loops, separated by SCCs clusters in order of appearance in the original code.
* '''Postlude''': Where all loop-independent variables, inductions and reductions are returned.
* '''Cleanup''': Implement plain (non-vectorized) loops for iterations at the end of a loop that are not a multiple of the vector size or for when run-time checks prohibit vector processing.
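
A schematic sketch of the four stages for the addition loop used earlier. The overlap check <code>arrays_may_overlap</code> is a hypothetical placeholder, and the slice notation is the pseudocode convention used later in this article:

<source lang="c">
int i = 0;

/* Prelude: run-time dependence check; fall back if vectorization is unsafe */
if (arrays_may_overlap(a, b, c, n))  // hypothetical check
    goto cleanup;

/* Loop(s): vectorized body, one vector instruction per four elements */
for (; i + 4 <= n; i += 4)
    c[i:i+3] = a[i:i+3] + b[i:i+3];

/* Postlude: copy back inductions and reductions (none in this example) */

/* Cleanup: scalar loop for the remainder, also the fallback target */
cleanup:
for (; i < n; i++)
    c[i] = a[i] + b[i];
</source>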
 
==Run-time vs. compile-time==
Some vectorizations cannot be fully checked at compile time. For example, compile-time analysis requires explicit array indices, and library functions can also defeat the analysis if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on the fly.
 
This run-time check is made in the '''prelude''' stage and directs the flow to vectorized instructions if possible, otherwise reverting to standard scalar processing.
 
The following code can easily be vectorized at compile time, as it has no dependence on external parameters. Also, the language guarantees that neither array will occupy the same region in memory as any other variable, as they are local variables and live only in the execution [[stack (data structure)|stack]].
 
<source lang="C">
int a[128];
int b[128];
// initialize b
 
for (i = 0; i<128; i++)
  a[i] = b[i] + 5;
</source>
 
On the other hand, the code below has no information on memory positions, because the references are [[pointer (computer programming)|pointer]]s and the memory they point to lives in the [[Dynamic memory allocation|heap]].
 
<source lang="C">
int *a = malloc(128*sizeof(int));
int *b = malloc(128*sizeof(int));
int *pa = a, *pb = b; // walk with cursor pointers so that a and b
                      // still point at the allocations when freed
// initialize b

for (i = 0; i < 128; i++, pa++, pb++)
  *pa = *pb + 5;
// ...
free(b);
free(a);
</source>
 
A quick run-time check on the [[memory address|address]] of both ''a'' and ''b'', plus the loop iteration space (128) is enough to tell if the arrays overlap or not, thus revealing any dependencies.
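
One possible shape of that check (an illustrative sketch; real compilers emit an equivalent address comparison):

<source lang="c">
#include <stdint.h>

/* the two 128-element regions are disjoint iff one ends before the other begins */
uintptr_t pa = (uintptr_t)a, pb = (uintptr_t)b;
int disjoint = (pa + 128 * sizeof(int) <= pb) || (pb + 128 * sizeof(int) <= pa);

if (disjoint) {
    /* safe: run the vectorized loop */
} else {
    /* possible overlap: run the scalar loop */
}
</source>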
 
There exist tools to dynamically analyze existing applications and assess their latent potential for SIMD parallelism, exploitable through further compiler advances and/or manual code changes.<ref>[http://dl.acm.org/citation.cfm?id=2254108]</ref>
 
==Techniques==
An example would be a program to multiply two vectors of numeric data.  A scalar approach would be something like:
 
<source lang="C">
for (i = 0; i < 1024; i++)
    C[i] = A[i]*B[i];
</source>
 
This could be vectorized to look something like:
 
<source lang="C">
  for (i = 0; i < 1024; i+=4)
    C[i:i+3] = A[i:i+3]*B[i:i+3];
</source>
 
Here, C[i:i+3] represents the four array elements from C[i] to C[i+3] and the vector processor can perform four operations for a single vector instruction. Since the four vector operations complete in roughly the same time as one scalar instruction, the vector approach can run up to four times faster than the original code.
 
There are two distinct compiler approaches: one based on the conventional vectorization technique and the other based on [[loop unwinding|loop unrolling]].
 
===Loop-level automatic vectorization===
This technique, used for conventional vector machines, tries to find and exploit [[SIMD]] parallelism at the loop level. It consists of two major steps as follows.
 
# Find an innermost loop that can be vectorized
# Transform the loop and generate vector codes
 
In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle for vectorization is [[Instruction level parallelism|true data dependency]] shorter than the vector length. Other obstacles include function calls and short iteration counts.
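
For instance, an opaque call in the loop body generally blocks vectorization, since the compiler must assume it may have side effects (an illustrative sketch; <code>f</code> is a hypothetical external function):

<source lang="c">
for (i = 0; i < 1024; i++)
    C[i] = f(A[i]); // unless f can be inlined or proven side-effect free,
                    // the compiler cannot reorder or widen these calls
</source>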
 
Once the loop is determined to be vectorizable, the loop is stripmined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. Below, the component transformations for this step are shown using the above example.
* After stripmining
 
<source lang="C">
for (i = 0; i < 1024; i+=4)
    for (ii = 0; ii < 4; ii++)
      C[i+ii] = A[i+ii]*B[i+ii];
</source>
* After loop distribution using temporary arrays
 
<source lang="C">
  for (i = 0; i < 1024; i+=4)
  {
    for (ii = 0; ii < 4; ii++) tA[ii] = A[i+ii];
    for (ii = 0; ii < 4; ii++) tB[ii] = B[i+ii];
    for (ii = 0; ii < 4; ii++) tC[ii] = tA[ii]*tB[ii];
    for (ii = 0; ii < 4; ii++) C[i+ii] = tC[ii];
  }
</source>
* After replacing with vector codes
 
<source lang="C">for (i = 0; i < 1024; i+=4)
  {
    vA = vec_ld( &A[i] );
    vB = vec_ld( &B[i] );
    vC = vec_mul( vA, vB );
    vec_st( vC, &C[i] );
  }
</source>
 
===Basic block level automatic vectorization===
This relatively new technique specifically targets modern SIMD architectures with short vector lengths.<ref>{{Cite journal | last1=Larsen | first1=S. | last2=Amarasinghe | first2=S. | contribution=Exploiting superword level parallelism with multimedia instruction sets | pages=145–156 | title=Proceedings of the ACM SIGPLAN conference on Programming language design and implementation | year=2000 | doi=10.1145/358438.349320 | journal=ACM SIGPLAN Notices | volume=35 | issue=5
}}</ref> Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are as follows.
 
# The innermost loop is unrolled by a factor of the vector length to form a large loop body.
# Isomorphic scalar instructions (that perform the same operation) are packed into a vector instruction if dependencies do not prevent doing so.
 
To show step-by-step transformations for this approach, the same example is used again.
* After loop unrolling (by the vector length, assumed to be 4 in this case)
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
   {
    sA0 = ld( &A[i+0] );
    sB0 = ld( &B[i+0] );
    sC0 = sA0 * sB0;
    st( sC0, &C[i+0] );
          ...
    sA3 = ld( &A[i+3] );
    sB3 = ld( &B[i+3] );
    sC3 = sA3 * sB3;
    st( sC3, &C[i+3] );
  }
</source>
* After packing
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
  {
    (sA0,sA1,sA2,sA3) = ld( &A[i+0:i+3] );
    (sB0,sB1,sB2,sB3) = ld( &B[i+0:i+3] );
    (sC0,sC1,sC2,sC3) = (sA0,sA1,sA2,sA3) * (sB0,sB1,sB2,sB3);
    st( (sC0,sC1,sC2,sC3), &C[i+0:i+3] );
  }
</source>
* After code generation
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
  {
    vA = vec_ld( &A[i] );
    vB = vec_ld( &B[i] );
    vC = vec_mul( vA, vB );
    vec_st( vC, &C[i] );
  }
</source>
Here, sA0, sB0, ... represent scalar variables and vA, vB, and vC represent vector variables.
 
Most automatically vectorizing commercial compilers use the conventional loop-level approach except the IBM XL Compiler,<ref name=ibm/> which uses both.
 
===In the presence of control flow===
The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization (using one of the above methods) → remove vector predicates → remove scalar predicates.<ref>{{Cite journal | last1=Shin | first1=J. | last2=Hall | first2=M. W. | last3=Chame | first3=J. | contribution=Superword-Level Parallelism in the Presence of Control Flow | pages=165–175
| title=Proceedings of the international symposium on Code generation and optimization | year=2005
| doi=10.1109/CGO.2005.33 |isbn=0-7695-2298-X}}</ref> The following code is used as an example to show these transformations:
 
<source lang=c>
for (i = 0; i < 1024; i++)
    if (A[i] > 0)
      C[i] = B[i];
    else
      D[i] = D[i-1];
</source>
* After predication
 
<source lang=c>
for (i = 0; i < 1024; i++)
  {
    P = A[i] > 0;
    NP = !P;
    C[i] = B[i];    (P)
    D[i] = D[i-1];  (NP)
  }
</source>
where (P) denotes a predicate guarding the statement.
* After vectorization
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
   {
    vP  = A[i:i+3] > (0,0,0,0);
    vNP = vec_not(vP);
    C[i:i+3] = B[i:i+3];    (vP)
    (NP1,NP2,NP3,NP4) = vNP;
    D[i+3] = D[i+2];        (NP4)
    D[i+2] = D[i+1];        (NP3)
    D[i+1] = D[i];          (NP2)
    D[i]   = D[i-1];        (NP1)
  }
</source>
* After removing vector predicates
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
  {
    vP  = A[i:i+3] > (0,0,0,0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3],B[i:i+3],vP);
    (NP1,NP2,NP3,NP4) = vNP;
    D[i+3] = D[i+2];        (NP4)
    D[i+2] = D[i+1];        (NP3)
    D[i+1] = D[i];          (NP2)
    D[i]   = D[i-1];        (NP1)
  }
</source>
* After removing scalar predicates
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
  {
    vP  = A[i:i+3] > (0,0,0,0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3],B[i:i+3],vP);
    (NP1,NP2,NP3,NP4) = vNP;
    if (NP4) D[i+3] = D[i+2];
    if (NP3) D[i+2] = D[i+1];
    if (NP2) D[i+1] = D[i];
    if (NP1) D[i]   = D[i-1];
  }
</source>
 
===Reducing vectorization overhead in the presence of control flow===
Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down the vector code with respect to the scalar baseline. The more complex the control flow becomes, and the more instructions are bypassed in the scalar code, the larger the vectorization overhead grows. To reduce this vectorization overhead, vector branches can be inserted to bypass vector instructions, similar to the way scalar branches bypass scalar instructions.<ref>{{Cite journal | last=Shin | first=J. | contribution=Introducing Control Flow into Vectorized Code | pages=280–291 | title=Proceedings of the 16th International Conference on Parallel Architecture and Compilation Techniques | year=2007 | doi=10.1109/PACT.2007.41}}</ref> Below, AltiVec predicates are used to show how this can be achieved.
* Scalar baseline (original code)
 
<source lang=c>
for (i = 0; i < 1024; i++)
  {
    if (A[i] > 0)
    {
      C[i] = B[i];
      if (B[i] < 0)
        D[i] = E[i];
    }
  }
</source>
* After vectorization in the presence of control flow
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
  {
    vPA = A[i:i+3] > (0,0,0,0);
    C[i:i+3] = vec_sel(C[i:i+3],B[i:i+3],vPA);
    vT = B[i:i+3] < (0,0,0,0);
    vPB = vec_sel((0,0,0,0), vT, vPA);
    D[i:i+3] = vec_sel(D[i:i+3],E[i:i+3],vPB);
  }
</source>
* After inserting vector branches
 
<source lang=c>
for (i = 0; i < 1024; i+=4)
    if (vec_any_gt(A[i:i+3],(0,0,0,0)))
    {
        vPA = A[i:i+3] > (0,0,0,0);
        C[i:i+3] = vec_sel(C[i:i+3],B[i:i+3],vPA);
        vT = B[i:i+3] < (0,0,0,0);
        vPB = vec_sel((0,0,0,0), vT, vPA);
        if (vec_any_ne(vPB,(0,0,0,0)))
          D[i:i+3] = vec_sel(D[i:i+3],E[i:i+3],vPB);
    }
</source>
There are two things to note in the final code with vector branches. First, the predicate-defining instruction for vPA is also included within the body of the outer vector branch by using vec_any_gt. Second, the profitability of the inner vector branch for vPB depends on the conditional probability of vPB having false values in all fields given that vPA has false values in all fields.
 
Consider an example where the outer branch in the scalar baseline is always taken, bypassing most instructions in the loop body.  The intermediate case above, without vector branches, executes all vector instructions. The final code, with vector branches, executes both the comparison and the branch in vector mode, potentially gaining performance over the scalar baseline.
 
==See also==
* [[Chaining (vector processing)]]
 
==References==
{{Reflist|refs
<ref name=ibm>{{cite web
|url=http://www.ess.uci.edu/esmf/ibm_compiler_docs/xl_optimization.pdf
|title=Code Optimization with IBM XL Compilers
|date=June 2004
|accessdate=May 2010
}}</ref>
}}
 
{{DEFAULTSORT:Vectorization (Computer Science)}}
[[Category:Compiler optimizations]]
[[Category:Distributed computing problems]]
 
[[de:Vektorisierung]]
[[lt:Vektorizacija]]
[[ja:ベクトル化]]
