Optical microscope
The optical microscope, also referred to as a light microscope, is a type of microscope that commonly uses visible light and a system of lenses to generate magnified images of small objects. Optical microscopes are the oldest design of microscope and were possibly invented in their present compound form in the 17th century. Basic optical microscopes can be very simple, although many complex designs aim to improve resolution and sample contrast. The object is placed on a stage and may be viewed directly through one or two eyepieces on the microscope.
Limiting magnitude
In astronomy, limiting magnitude is the faintest apparent magnitude of a celestial body that is detectable or detected by a given instrument. In some cases, limiting magnitude refers to the upper threshold of detection. In more formal uses, limiting magnitude is specified along with the strength of the signal (e.g., "10th magnitude at 20 sigma"). Sometimes limiting magnitude is qualified by the purpose of the instrument (e.g., "10th magnitude for photometry"). This statement recognizes that a photometric detector can detect light far fainter than it can reliably measure.
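As a concrete illustration, amateur astronomy often quotes a rough rule of thumb relating a telescope's aperture to its visual limiting magnitude, m_lim ≈ 2 + 5 log10(D) with D in millimetres. The sketch below assumes that rule, which is introduced here for illustration rather than stated above; real limits depend heavily on sky brightness, optics quality, and the observer.

```python
import math

def visual_limiting_magnitude(aperture_mm: float) -> float:
    """Rule-of-thumb visual limiting magnitude for a given aperture:
    m_lim ~ 2 + 5*log10(D), D in millimetres. A rough amateur-astronomy
    estimate, not a rigorous model."""
    return 2.0 + 5.0 * math.log10(aperture_mm)

# A dark-adapted eye (~7 mm pupil) vs. a small 100 mm telescope:
print(round(visual_limiting_magnitude(7), 1))    # ~6.2
print(round(visual_limiting_magnitude(100), 1))  # ~12.0
```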
X Window System
The X Window System (X11, or simply X) is a windowing system for bitmap displays, common on Unix-like operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface; this is handled by individual programs. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces.
Moment magnitude scale
The moment magnitude scale (MMS; denoted explicitly with Mw, and generally implied with the use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. It was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale (ML) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales.
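The Hanks and Kanamori definition ties magnitude directly to seismic moment: Mw = (2/3) log10(M0) − 10.7, with M0 in dyne-cm. A minimal sketch of that formula (the example moment value is illustrative, not taken from the text):

```python
import math

def moment_magnitude(seismic_moment_dyne_cm: float) -> float:
    """Moment magnitude from seismic moment M0 (in dyne-cm),
    per the Hanks & Kanamori (1979) definition:
        Mw = (2/3) * log10(M0) - 10.7
    """
    return (2.0 / 3.0) * math.log10(seismic_moment_dyne_cm) - 10.7

# An earthquake with a seismic moment of 1e27 dyne-cm:
print(round(moment_magnitude(1e27), 1))  # ~7.3
```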
Genetic algorithm
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover, and selection. Examples of GA applications include optimizing decision trees for better performance, solving Sudoku puzzles, hyperparameter optimization, and causal inference.
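A minimal sketch showing the three operators named above on a toy problem (OneMax: maximize the number of 1-bits in a bitstring); the specific choices of tournament selection and one-point crossover are assumptions for illustration, not a prescribed GA design:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=50, generations=100,
                      mutation_rate=0.01):
    """Toy GA over bitstrings: tournament selection, one-point
    crossover, per-bit mutation. A sketch, not a tuned library."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()          # selection
            cut = random.randrange(1, length)            # crossover point
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < mutation_rate)  # mutation
                     for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1-bits.
best = genetic_algorithm(fitness=sum)
print(sum(best), best)
```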
Combinatorial optimization
Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, including those just mentioned, exhaustive search is not tractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
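For the knapsack problem named above, dynamic programming is one classic way to avoid enumerating all 2^n subsets of items; a minimal sketch with made-up example data:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming: dp[c] is the best total
    value achievable with capacity c using the items seen so far."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Three items, capacity 5: taking the weight-2 and weight-3 items
# yields the optimal total value of 7.
print(knapsack(values=[3, 4, 5], weights=[2, 3, 4], capacity=5))  # 7
```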
CPU cache
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have a hierarchy of multiple cache levels (L1, L2, often L3, and rarely even L4), with separate instruction-specific and data-specific caches at level 1.
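A toy model can make the hit/miss behaviour concrete. The sketch below assumes a direct-mapped organization (each address maps to exactly one cache line), which is one simple cache design among several, chosen here purely for illustration:

```python
class DirectMappedCache:
    """Toy direct-mapped cache model: an address maps to one line,
    selected by (address // line_size) % num_lines. Tracks hits and
    misses only; no data is actually stored."""
    def __init__(self, num_lines=64, line_size=64):
        self.num_lines, self.line_size = num_lines, line_size
        self.tags = [None] * num_lines
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.line_size
        index = block % self.num_lines
        tag = block // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
        else:                        # miss: fetch block, evict old line
            self.misses += 1
            self.tags[index] = tag

cache = DirectMappedCache()
for addr in range(0, 4096, 8):       # sequential scan: mostly hits
    cache.access(addr)
print(cache.hits, cache.misses)      # 448 hits, 64 misses
```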
Two-photon excitation microscopy
Two-photon excitation microscopy (TPEF or 2PEF) is a fluorescence imaging technique that is particularly well suited to imaging scattering living tissue up to about one millimeter in thickness. Unlike traditional fluorescence microscopy, where the excitation wavelength is shorter than the emission wavelength, two-photon excitation requires simultaneous excitation by two photons of longer wavelength than the emitted light. The laser is focused onto a specific location in the tissue and scanned across the sample to sequentially produce the image.
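The energy arithmetic behind the longer excitation wavelength follows from the photon energy relation E = hc/λ: two photons together supply roughly the energy of a single photon at half the wavelength. The 800 nm/400 nm pairing below is an illustrative choice, not taken from the text:

```python
# Photon energy E = h*c / wavelength. Two photons at 800 nm together
# carry roughly the energy of one 400 nm photon, which is why
# two-photon excitation can use longer-wavelength light than the
# emission. Illustrative numbers only.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9)

print(2 * photon_energy(800))   # ~4.97e-19 J
print(photon_energy(400))       # ~4.97e-19 J
```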
Multi-objective optimization
Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics, and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
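The central notion is Pareto dominance: one solution dominates another if it is no worse in every objective and strictly better in at least one; the non-dominated solutions form the Pareto front. A minimal sketch for minimization, with made-up two-objective data:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Two conflicting objectives, e.g. cost vs. delivery time (toy data):
points = [(1, 9), (2, 7), (3, 8), (4, 3), (6, 2), (7, 4)]
print(pareto_front(points))  # [(1, 9), (2, 7), (4, 3), (6, 2)]
```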
Approximation algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time.
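A classic example of such a provable guarantee (chosen here for illustration; it is not discussed above) is the factor-2 approximation for minimum vertex cover:

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover: repeatedly
    pick an uncovered edge and add both endpoints. The chosen edges
    form a matching that lower-bounds the optimum, so the returned
    cover is at most twice the optimal size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A path a-b-c-d: the optimum cover is {b, c}; the algorithm may
# return all four vertices, still within the factor-2 guarantee.
print(vertex_cover_2approx([("a", "b"), ("b", "c"), ("c", "d")]))
```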
Time complexity
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
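The operation-counting idea can be made concrete with two toy loops (an illustration constructed here, not an example from the text): the count for a single loop grows linearly in n, while the count for nested loops grows quadratically.

```python
def count_ops_linear(n):
    """Single loop: the operation count grows as c*n."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_ops_quadratic(n):
    """Nested loops: the operation count grows as c*n**2."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_ops_linear(n), count_ops_quadratic(n))
# As n grows 10x per step, the linear count also grows 10x,
# while the quadratic count grows 100x.
```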
Reflection seismology
Reflection seismology (or seismic reflection) is a method of exploration geophysics that uses the principles of seismology to estimate the properties of the Earth's subsurface from reflected seismic waves. The method requires a controlled seismic source of energy, such as a dynamite or Tovex blast, a specialized air gun, or a seismic vibrator. Reflection seismology is similar to sonar and echolocation. Reflections and refractions of seismic waves at geologic interfaces within the Earth were first observed on recordings of earthquake-generated seismic waves.
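A textbook single-layer model gives a feel for how reflected waves carry depth information: for a flat horizontal interface at depth d in a medium of velocity v, the two-way travel time at source-receiver offset x is t(x) = sqrt(t0^2 + (x/v)^2) with t0 = 2d/v. This formula and the example numbers are illustrative assumptions added here, not drawn from the text above.

```python
import math

def reflection_travel_time(offset_m, depth_m, velocity_m_s):
    """Two-way travel time of a wave reflected from a flat horizontal
    interface at depth d, recorded at offset x:
        t(x) = sqrt(t0**2 + (x/v)**2),  t0 = 2*d/v
    A single-layer textbook model, for illustration only."""
    t0 = 2.0 * depth_m / velocity_m_s
    return math.sqrt(t0**2 + (offset_m / velocity_m_s) ** 2)

# Interface at 1 km depth, velocity 2000 m/s, geophone 400 m away:
print(round(reflection_travel_time(400, 1000, 2000), 3))  # ~1.02 s
```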