In computer science, dataflow refers to the way data moves through an application, mapping the path it takes from one module to another within a program. A typical flow begins with data entered through an on-screen form and ends with that data written to a storage device.
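As a rough illustration, that screen-to-storage path can be traced as a series of stages; this is a minimal sketch, and the function names and record fields below are hypothetical.

```python
import json

def read_input() -> dict:
    """Stage 1: data enters the program, e.g. from an on-screen form."""
    return {"customer": "Acme Corp", "amount": 125.50}

def validate(record: dict) -> dict:
    """Stage 2: an intermediate module checks the data in transit."""
    if record["amount"] < 0:
        raise ValueError("amount must be non-negative")
    return record

def store(record: dict, path: str = "orders.json") -> None:
    """Stage 3: the flow ends at a storage device."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# The dataflow: screen -> validation -> disk.
store(validate(read_input()))
```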
Dataflow design relies on data flow diagrams (DFDs), which graphically map how data moves through a program. The DFD is essential in architectural design because it defines what data is needed to complete specific business functions.
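A DFD is a drawing rather than code, but its elements, external entities, processes, data stores, and the flows between them, can be sketched as plain data to show what each function consumes and produces. The order-entry example below is hypothetical.

```python
# Hypothetical DFD for an order-entry function, expressed as data.
# Each flow is (source, data item, destination).
flows = [
    ("Customer",                 "order details", "Process: Validate Order"),
    ("Process: Validate Order",  "valid order",   "Process: Price Order"),
    ("Data Store: Price List",   "unit prices",   "Process: Price Order"),
    ("Process: Price Order",     "priced order",  "Data Store: Orders"),
]

# Reading the flows answers the architectural question:
# what data does each business function need?
for source, data, destination in flows:
    print(f"{source} --[{data}]--> {destination}")
```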
The dataflow diagram approach has been used for several decades and provides detailed information on how data is manipulated within a program. DFDs are a standard part of the design documentation for most software projects.
Dataflow analysis is the engineering task of reviewing how a company's data is created and used. This analysis helps a company determine what data is available for reporting and dissemination. The analyst typically prepares charts and workflows that define how computer programs use that data.
A network engineer manages the flow of data packets on a computer network, ensuring that information moves smoothly throughout the company. Most network engineers rely on dataflow diagrams to pinpoint where data is likely to bottleneck on the network.
Network traffic patterns and data packet sizes are also important to network dataflow analysis. Understanding them helps a company determine the volume a network must carry: the size of each transmission and how frequently it occurs together determine how much bandwidth is required.
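For example, a back-of-the-envelope bandwidth estimate simply multiplies message size by message frequency; the traffic figures below are invented for illustration.

```python
# Hypothetical traffic profile: 200 clients, each sending a
# 1,500-byte packet 50 times per second.
clients = 200
packet_bytes = 1500
packets_per_second = 50

# Bandwidth = size of each transmission x how often it occurs.
bits_per_second = clients * packet_bytes * 8 * packets_per_second
print(f"Required bandwidth: {bits_per_second / 1e6:.0f} Mbps")
# -> Required bandwidth: 120 Mbps
```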
Dataflow programming is often used in accounting and finance applications. These programs attach mathematical equations to specific fields on screen; when the user changes a field's value, the equations automatically recalculate the values of dependent fields. This behavior is common in tax preparation software.
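A minimal sketch of this idea, assuming a spreadsheet-like cell that pushes new values to its dependents when it changes; the Cell class, field names, and 22% rate are all hypothetical.

```python
class Cell:
    """A field whose dependents recalculate when its value changes."""
    def __init__(self, value=0.0):
        self._value = value
        self._dependents = []  # (target_cell, formula) pairs

    def bind(self, target, formula):
        """Attach an equation: target = formula(this cell's value)."""
        self._dependents.append((target, formula))

    def set(self, value):
        self._value = value
        # Dataflow step: push the new value through each equation.
        for target, formula in self._dependents:
            target.set(formula(value))

    def get(self):
        return self._value

# Tax-preparation style example: editing income updates tax owed.
income = Cell()
tax = Cell()
income.bind(tax, lambda amount: amount * 0.22)  # hypothetical flat rate

income.set(50_000)
print(tax.get())  # 11000.0 -- recalculated automatically
```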
A dataflow diagram should also define how data changes in an error situation. This helps designers determine where error-management processes are needed. Designing the negative path alongside the positive one helps ensure the system performs as expected in both situations.
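One way to make the negative path explicit in code is to route bad records into a separate flow instead of letting them disappear; this sketch assumes a hypothetical record layout with an "amount" field.

```python
def process(records):
    """Split the flow: valid records continue, bad ones are captured."""
    accepted, rejected = [], []
    for record in records:
        try:
            if record["amount"] < 0:
                raise ValueError("negative amount")
            accepted.append(record)
        except (KeyError, ValueError) as err:
            # Error path defined up front: the record and the reason
            # flow to an error store rather than being silently dropped.
            rejected.append((record, str(err)))
    return accepted, rejected

ok, bad = process([{"amount": 10}, {"amount": -5}, {}])
print(len(ok), "accepted;", len(bad), "rejected")  # 1 accepted; 2 rejected
```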