Test title:
Arquitectura de Computadores

Description:
A collection of computer architecture exam questions.

Author:
AVATAR

Creation date:
19/01/2021

Category:
Computer Science

Number of questions: 84
Questions:
Moore's Law has ceased to hold since 2005. True False.
The term computer architecture: Describes the computer attributes that are visible to the programmer. Describes the internal interconnection of elements in the microprocessor. Describes the logical and physical design of the processor. Describes the physical implementation of the microprocessor.
A Warehouse-Scale Computer is classified as: MIMD SISD MISD SIMD.
Instruction level parallelism Exploits data parallelism applying one instruction to multiple data in parallel. Exploits data parallelism or task parallelism in highly coupled hardware, allowing interactions among threads. Exploits parallelism in highly decoupled tasks. Exploits data parallelism with help of the compiler.
x86 ISA Uses branching on register values. Is a load/store ISA. Requires that all memory accesses be aligned. Uses variable-length instructions.
Dynamic energy Increases linearly with switching frequency. Increases quadratically with switching frequency. Increases linearly with voltage. Is the amount of energy needed to switch.
SPECWeb benchmark is A benchmark for servers A benchmark for desktop A benchmark for embedded systems Consists of SPECrate and SPECspeed.
The only metric completely reliable when comparing performance of two computers is: Response time from a benchmark. CPU time from kernels. Execution of synthetic benchmarks. Execution of real programs. .
In a speculative superscalar processor: Hazards detection is speculative. Scheduling is dynamic with speculation. Speculative execution is in-order. Instruction issue is speculative.
A basic block is: A sequence of instructions that does not include load/store operations. A sequence of instructions without branches. A sequence of instructions in which all branches are unconditional. A code block that can be invoked from various points in a program.
In a VLIW (Very Large Instruction Word) processor: Hazard detection is done by hardware. Binary compatibility is not a problem. It is very complicated for the compiler to find parallelism. Generated executable code is more compact.
A correlated predictor (2,2) with 4K entries requires: 4 KB 8 KB 32 KB 16 KB.
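The sizing behind this question can be worked out directly. In an (m, n) correlated predictor each entry holds 2^m separate n-bit counters, one per global history pattern, so the total storage is 2^m × n × entries. A minimal sketch (assuming the answer options are in Kbits, as in the classic Hennessy & Patterson exercise this question resembles):

```cpp
#include <cstddef>

// Storage of an (m, n) correlated predictor:
// each entry holds 2^m selectable n-bit counters.
constexpr std::size_t predictor_bits(std::size_t m, std::size_t n,
                                     std::size_t entries) {
    return (std::size_t{1} << m) * n * entries;
}
// (2,2) predictor, 4K entries: 4 * 2 * 4096 = 32768 bits = 32 Kbits
```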
Which of the following is NOT a hazard that may happen in a pipeline? Control hazard. Dependency hazard. Structural hazard. Data hazard.
A RAW hazard: It is also known as true dependency. It is also known as anti-dependence. It is also known as an output dependency. Cannot happen in a five stage MIPS.
The main drawback of dynamic instruction scheduling is: Cannot tolerate non predictable delays. Needed hardware is more complex. Optimized code for a pipeline does not run efficiently in a different pipeline. It does not manage dependencies known at compile time.
With coarse grain multi-threading: Pipeline must be flushed or frozen. Short and long stalls may be hidden. Separated ROB (reorder buffer) are needed. Processor must be able to switch thread in every clock cycle.
Compiler effectiveness for using delayed branching with one delay slot is approximately: Around 50% of slots are usefully filled. Around 100% of slots are usefully filled. Around 60% of slots are usefully filled. Around 80% of slots are usefully filled.
Using a pipelined architecture: Increases throughput. Decreases throughput. Decreases latency. Keeps throughput unchanged.
Spatial locality principle: Affects data accesses, but not instruction accesses. Happens when accessing loop control variables. Happens when traversing arrays. Happens when reusing variables.
Hit rate is computed: Dividing the sum of number of hits and misses by the number of hits. Dividing the number of hits by the number of misses. Dividing the number of misses by the number of hits. Dividing the number of hits by the sum of hits and misses. .
When increasing cache size: Miss rate increases. Hit time decreases. Energy consumption increases. Cost is not increased.
When decreasing size of cache memory: Miss rate is decreased. Seek time is decreased. Transfer time is decreased. Associativity is decreased.
In a system with virtual memory, page placement policy is: Direct mapping. There is no placement policy. Fully associative mapping. Set associative mapping.
Impure virtualization is: A technique based in Intel-VT extensions. A technique for virtualizing an ISA on a different ISA. A solution for architectures that are not fully virtualizable. A solution for fully virtualizable architectures.
A multiprocessor is: A computer consisting of highly coupled processors coordinated by a single operating system. A computer consisting of highly coupled processors coordinated by multiple operating systems. A computer consisting of multiple cores each of them with its own virtual memory space. Several processors integrated in a board that do not have input/output elements.
A memory system is coherent if: Any read from a memory location returns the most recent value that was written to that memory location. Any read from a memory location returns the most recent value that was read from that memory location. Any write to a memory location returns the most recent value that was read from that memory location. Any write to a memory location returns the most recent value that was written to that memory location.
DSM is: A kind of multiprocessor with distributed shared memory. A processor with centralized memory and non uniform access. A kind of multiprocessor with shared centralized memory. Dynamic Shared Memory, a multiprocessor with dynamic virtual memory.
Select which of the following characteristics does NOT belong to a centralized directory: Different coherence requests go to different directories. May lead to scalability problems as the number of processors increases. Is a bottleneck. Avoids broadcasting.
Release/acquire consistency model: In contrast with weak consistency, it does not distinguish synchronization operations. Is a theoretical model with no practical implementation. Is less relaxed than weak consistency. Is more relaxed than weak consistency. .
Memory consistency models: Specify the memory view offered to the programmer. Only establish what messages must be exchanged between caches to keep coherence and when those exchanges happen. Establish how the memory access bus is arbitrated. Are fundamental for building multiprocessor systems based on message passing.
The relaxed consistency memory model named weak consistency or weak ordering: Interleaves data operations and synchronization operations. Is a theoretical model that has no practical implementation. In some cases allows reorganizing synchronization operations. Assumes that reordering data operations between synchronization operations does not affect program correctness.
In shared memory synchronization, Test and Set is: An atomic sequence that transfers the data item from a memory location to a register and writes "1" in that memory location. Is a sequence of data exchange between multicores, performed atomically. An atomic sequence that transfers a data item from a memory location to a register and writes "0" in that memory location. A sequence that transfers a data item from a memory location to a register and writes "1" in that memory location.
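The test-and-set primitive in this question can be sketched with a C++11 atomic exchange, which atomically writes the new value and returns the old one. This is an illustrative sketch, not a particular hardware instruction; the function names are ours:

```cpp
#include <atomic>

// Test-and-set sketch: atomically write 1 into the lock word and
// return its previous value. The lock is acquired only when the
// previous value was 0.
inline bool try_acquire(std::atomic<int>& lock_word) {
    return lock_word.exchange(1) == 0;  // true -> we obtained the lock
}

// Release simply writes 0 back to the lock word.
inline void release(std::atomic<int>& lock_word) {
    lock_word.store(0);
}
```

A spinlock built on this would loop on `try_acquire` until it returns true (busy waiting, as the next question describes).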
Busy waiting in thread synchronization: The process waits a defined time and if a condition is not satisfied the operation is aborted. Is a synchronization mechanism in which the process remains blocked in an active queue. Is performed completely in user mode. The process waits indefinitely and remains blocked forever.
Which argument passing is conceptually equivalent to passing by const reference? Pass by pointer Pass by pointer to const Pass by value Pass by const pointer.
In the variable declaration vector<int> v(5); The variable v is: An array of 1 integer element initialized to value 5 An array of 5 integer elements whose values are initialized to 0 An array of 5 integer elements whose values are unknown An array of 0 integers that can grow until a size of 5.
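The declaration in this question can be checked directly: `std::vector<int> v(5)` value-initializes five elements, and value-initialized `int`s are zero. A minimal sketch (the helper name `make_v` is ours):

```cpp
#include <vector>

// vector<int>(5) creates 5 elements, each value-initialized to 0.
std::vector<int> make_v() {
    return std::vector<int>(5);
}
```

Note the contrast with `vector<int> v{5}` (brace initialization), which would instead create a single element holding the value 5.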
In OpenMP, the dynamic scheduling Uses iteration blocks of fixed size Uses iteration blocks of increasing size Uses a varying number of threads Uses iteration blocks of decreasing size.
In OpenMP a barrier... Can be used within a parallel region Is never implicit Is obtained by calling function omp_barrier() Needs to be explicit at the beginning of every for loop.
An object of type condition_variable It is optimized to be used with std::mutex but it can be used with other mutex types It can only be used in conjunction with an object of type std::mutex Is guaranteed to be lock free in all platforms It can be used in conjunction with an object of type std::mutex or std::recursive_mutex.
In C++11, function std::lock() Can take one or more objects and each of them can be either a std::mutex or a std::unique_lock Can only take one or more std::unique_lock objects Can only take one or more std::mutex objects It does not exist. It is a member function of std::mutex.
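The usual idiom behind this question is worth seeing: `std::lock` acquires several lockables at once with a deadlock-avoidance algorithm, and `std::lock_guard` with `std::adopt_lock` then takes ownership so the mutexes are released on scope exit. A minimal sketch (the function `locked_sum` is ours, standing in for any critical section):

```cpp
#include <mutex>

// Lock two mutexes together without deadlock, then adopt them into
// guards so both are unlocked automatically at the end of the scope.
int locked_sum(std::mutex& m1, std::mutex& m2, int a, int b) {
    std::lock(m1, m2);  // all-or-nothing acquisition of both mutexes
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
    return a + b;       // critical section
}
```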
In class std::condition_variable_any, member function notify_all() Awakens all threads that are waiting in all condition variables Sends a message to all threads in the application Awakens one thread that is waiting in that condition variable Awakens all threads that are waiting in that condition variable.
In C++11, if two threads try to access the same memory location and both accesses are a read... There is a potential race condition if some access is non atomic There is a potential race condition There is no potential race condition if an ordering is enforced The result of both accesses is deterministic .
In C++11, if two threads try to access the same memory location simultaneously and any access is a write... There is no way to prevent the race condition There is a potential race condition unless an ordering between both accesses is enforced There is never a potential race condition unless they are adjacent bit fields There is a potential race condition that can only be prevented with an std::mutex.
In C++11, the atomic operation compare_exchange_weak() Allows for spurious failures Uses by default relaxed consistency Uses release/acquire consistency Does not allow for spurious failures.
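The property this question asks about shapes how the operation is used: because `compare_exchange_weak` may fail spuriously (return false even when the stored value equals the expected one), it normally sits inside a retry loop. A minimal sketch (the helper `fetch_double` is ours):

```cpp
#include <atomic>

// Atomically double the value of x using a CAS retry loop and
// return the value it held before the update.
int fetch_double(std::atomic<int>& x) {
    int old = x.load();
    // On failure (real or spurious), 'old' is refreshed with the
    // current value of x and the exchange is retried.
    while (!x.compare_exchange_weak(old, old * 2)) {
    }
    return old;
}
```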
A Web server spends 90% of its time on computing tasks and the rest on input/output operations. If the processor is replaced by another that can perform computations 18 times faster, what is the global speedup? IMPORTANT: Please, provide your answer as a value with two digits in the fractional part (e.g. 3.27) 6.67 6.87 6.77 6.50.
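This is a direct application of Amdahl's law: with enhanced fraction f = 0.9 and enhancement factor s = 18, the speedup is 1 / ((1 - f) + f/s) = 1 / (0.1 + 0.05) = 6.67. A worked sketch:

```cpp
// Amdahl's law: overall speedup when a fraction f of the execution
// time is accelerated by a factor s.
double amdahl(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}
// amdahl(0.9, 18.0) = 1 / (0.1 + 0.05) = 6.666... -> 6.67
```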
A computer runs instructions at 2 cycles per instruction when all cache accesses (both instructions and data) are hits. The processor runs at a frequency of 1 GHz. The only instructions performing data accesses are load instructions (read from memory) and store instructions (write to memory). Load instructions are 25% of the total number of executed instructions and store instructions are 20%. The miss penalty in the instruction cache is 26 nanoseconds and the hit rate is 88%. The miss penalty in the data cache is 64 nanoseconds and the hit rate is 97%. Compute the average instruction execution time (in nanoseconds) when there are misses in both caches as specified above. 9.968 6.000 7.321 8.500.
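A sketch of the standard way to set up this computation: base time plus instruction-fetch stall time plus data-access stall time, with a 1 ns cycle at 1 GHz. Under these assumptions the arithmetic gives 5.984 ns, which lands nearest the 6.000 option (the intended rounding in the original exam may differ):

```cpp
// Average instruction time (ns) = base CPI * cycle time
//   + I-cache miss rate * I-cache miss penalty   (every fetch)
//   + data-access fraction * D-cache miss rate * D-cache miss penalty
double avg_instr_time_ns() {
    double base    = 2.0 * 1.0;                            // CPI 2, 1 ns/cycle
    double i_stall = (1.0 - 0.88) * 26.0;                  // 0.12 * 26 = 3.12
    double d_stall = (0.25 + 0.20) * (1.0 - 0.97) * 64.0;  // 0.45 * 0.03 * 64
    return base + i_stall + d_stall;                       // = 5.984
}
```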
A system has three levels of cache memory. When running a given application, the following values are obtained for the hit rate when accessing data: L1: 0.8, L2: 0.8, L3: 0.1. Compute the global hit rate. 0.96 0.996 0.064 0.936.
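The usual reasoning for a multilevel hierarchy: a reference misses globally only if it misses in every level, so the global hit rate is one minus the product of the per-level miss rates. With the values above this gives 1 − 0.2 × 0.2 × 0.9 = 0.964, i.e. closest to the 0.96 option (the original exam may have rounded). A sketch:

```cpp
// Global hit rate of a three-level cache: a data reference misses
// overall only when it misses in L1, L2 and L3.
double global_hit(double h1, double h2, double h3) {
    return 1.0 - (1.0 - h1) * (1.0 - h2) * (1.0 - h3);
}
// global_hit(0.8, 0.8, 0.1) = 1 - 0.2 * 0.2 * 0.9 = 0.964
```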
Which is the unit used to measure execution time of a program in EduMips64? In nanoseconds In microseconds In seconds In cycles .
Every how many cycles can we start a new floating point addition (EduMips64)? 1 7 4 24.
What does LL cache stand for? L2 cache. L3 cache. Last level cache Load level cache.
Most I/O devices are faster than the CPU. True False.
A buffer is created with the following statement: seq_buffer b{100}; What is the maximum number of elements that b can hold? 100 101 99 buffers do not hold elements.
The faster the memory, the closer it is placed to the CPU. True False.
Some instructions inside a virtual machine can execute as fast as the hardware True False.
The fastest computer will be the one with the highest clock rate True False.
A larger cache block size for the same cache capacity reduces the miss rate. True False.
A higher cache associativity reduces conflict misses. True False.
Data hazard stalls in a processor pipeline can be minimized by forwarding True False.
“Predicted untaken” scheme is a branch prediction scheme in which instructions are fetched in the order in which they are stored in memory. True False.
Write propagation means writes by a CPU become visible to other CPUs writes by a CPU are updated in the local cache Writes to a cache are updated in the main memory.
Intuitively, if a memory location is read more intensively than it is written by several CPUs, the most adequate cache coherence protocol is: update invalidate.
Which orders are relaxed in the processor consistency model? read after read write after read read after write write after write true dependencies none of the above.
Test-and-test-and-set compares twice the same value of a register with the same value of a memory location. True False.
The orders relaxed by a consistency model can be exploited for reordering by Only static methods (compilers) Only dynamic methods (hardware) Both static and dynamic methods Neither static nor dynamic methods.
When a store conditional fails, it generates bus traffic. True False.
A benchmark is: The maximum performance a computer can achieve A grade assigned to the performance of a computer A program that assesses the performance of a computer.
Which of the following lock implementations requires the most storage? test-and-set test-and-test-and-set test-and-set with backoff load-locked/store conditional array-based lock.
The speedup is NOT The performance gain when executing on a faster architecture. The performance gain when applying an enhancement to an application. The performance gain when running on a larger number of nodes The superior speed of an application over another application.
A structural hazard arises: When an instruction depends on the result of a previous instruction From pipelining of branches that change the program counter (PC) From resource conflicts, when hardware cannot support simultaneously all instructions in overlapped execution.
A control dependency does not allow for static scheduling. True False.
Loop unrolling is a run-time technique. True False.
Reducing the miss rate reduces the miss penalty True False.
Which cache structure is the most simple to implement (less hardware)? associative 2-way set associative direct-mapped.
A page table may contain the mapping of a virtual page to a disk block. True False.
A virtual machine has a virtual memory on top of the virtual memory of a physical machine True False.
For which of the following classes of computers is throughput most important? mobile devices desktops servers embedded computers.
n-bit predictors are static branch prediction techniques True False.
Loop unrolling is a compile-time technique True False.
When data is in the cache (hit), write operations are faster for write-through than for write back caches. True False.
The second level of a two-level page table can never be stored ONLY on disk True False.
Giving priority to reads over writes targets to: reduce miss penalty reduce miss rate increase cache bandwidth reduce hit time.
Using multilevel caches targets to: reduce miss penalty reduce miss rate increase cache bandwidth reduce hit time.
Using multiple bank memories targets to: reduce miss penalty reduce miss rate increase cache bandwidth reduce hit time.
An application kernel is: An operating system designed for just one application a set of instructions of an application that causes most performance problems a key piece of a real application.
Merging write buffer targets to: reduce miss penalty reduce miss rate increase cache bandwidth reduce hit time.