Tutorial

The cfinterface framework is meant to be used when developing low-level interfaces to complex textual or binary files, where explicitly modeling the file schema is a desired feature. The abstractions defined by the framework allow the user to divide the modeling of file schemas into meaningful pieces, enabling reuse of schemas and content while reading and writing files.

Three main file types are provided by the framework: BlockFile, SectionFile and RegisterFile. Each of these file models is meant to be used in specific situations.

BlockFiles are files that can be modeled as blocks of text (or bytes), which can be easily identified by a given beginning pattern and an optional ending pattern. These patterns can be given as regexes when defining the entity in the framework. The steps for reading a block are up to the user to define by overloading the read function.

SectionFiles are a special case of BlockFiles that don't follow beginning or ending patterns, but can be divided into sections, which are direct divisions of the file content into subsets of lines or bytes. These are usually the simplest files, where the content does not vary in amount or contain repeating pieces.

RegisterFiles can also be seen as a special case of BlockFiles, where each block contains only one line. The internal implementation of a RegisterFile differs, however, from a BlockFile. Using the Line entity from the framework, RegisterFiles are built in a way that many registers can be defined with little overhead from the developer, allowing an extensive set of patterns to be modeled with little code maintenance.

Each of these file types, together with some additional details on other abstractions provided by the framework, is briefly presented in the following sections. For more details on each approach, check the examples page. If your use case is not covered, please contribute by opening an Issue.

Fields

The most fundamental components in the cfinterface framework are the Fields. Defined for both textual and binary interfaces, fields are containers for one specific data value, with a specific formatting, located in a file. The common base Field class defines generic reading and writing functions, whose implementations are given by each specific subclass.

Each specific Field has its own set of arguments. For instance, suppose the file being modeled contains a line from which a specific literal value is desired:

|   Username (max. 30 chars)   | ...
|   FooBar                     | ...

One can see that the Username begins in the second column and contains at most 30 characters. So, one can define a LiteralField that will extract, format and store the content:

from cfinterface import LiteralField

username = LiteralField(size=30, starting_position=1)

value = username.read("|   FooBar                     | ...")
# The content "FooBar" can be accessed by both value and username.value
assert value == "FooBar"
assert username.value == "FooBar"

Other fields are used for storing numeric values, such as IntegerField and FloatField. The DatetimeField is used specifically for constructing a datetime object directly from the file contents, following one or more format strings.
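
As a quick sketch of these fields in isolation, one can reuse the same constructor arguments shown above (size, starting_position, decimal_digits and format); the line content and positions below are made up purely for illustration:

from cfinterface import IntegerField, FloatField, DatetimeField

# Illustrative fixed-width line with an integer, a float and a date
raw = "   42    19.90  2020-05-20"

count = IntegerField(size=5, starting_position=0)
price = FloatField(size=9, starting_position=5, decimal_digits=2)
when = DatetimeField(size=10, starting_position=16, format="%Y-%m-%d")

assert count.read(raw) == 42
assert price.read(raw) == 19.90
assert when.read(raw).year == 2020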

Line

Usually a line in a file contains more than just one piece of desired information, and a single Field is not enough to model the line for reading and writing. In these cases, the Line component is the one suited for the task, being defined as a simple collection of fields. Continuing the previous example, suppose that the actual file lines contain more than just the username:

|   Username (max. 30 chars)   | Signup Date | Age | Balance ($) | ...
|   FooBar                     |  2020-05-20 |  18 |       99.90 | ...

The line contents are now modeled by a list of fields, which define a Line:

from datetime import datetime

from cfinterface import LiteralField, DatetimeField, IntegerField, FloatField, Line

username = LiteralField(size=30, starting_position=1)
signup_date = DatetimeField(size=13, starting_position=32, format="%Y-%m-%d")
age = IntegerField(size=5, starting_position=46)
balance = FloatField(size=13, starting_position=52, decimal_digits=2)

line = Line(fields=[username, signup_date, age, balance])

values = line.read("|   FooBar                     |  2020-05-20 |  18 |       99.90 | ...")
assert values == ["FooBar", datetime(2020, 5, 20), 18, 99.90]

Blocks and BlockFiles

Suppose there is a file whose content resembles the following:

MY_FIRST_BLOCK_BEGINNING_PATTERN
I have some raw text for describing my content, because I was meant for being directly read by someone.

Now I have some data. After which I will be done.

Date       Index     Value
2020/01        1    1000.0
2020/02        1    2000.0
2020/01        2    3000.0
2020/02        2    4000.0
2020/01        3    5000.0
2020/02        3    6000.0
MY_FIRST_BLOCK_ENDING_PATTERN

MY_SECOND_BLOCK_BEGINNING_PATTERN
My content is completely different from the previous block...
  Username    Last Login
     admin    1996/01/01
  sunshine    2000/01/01
 pineapple    1996/01/01
     admin    1996/01/01
MY_SECOND_BLOCK_ENDING_PATTERN

MY_FIRST_BLOCK_BEGINNING_PATTERN
...
MY_FIRST_BLOCK_ENDING_PATTERN

MY_FIRST_BLOCK_BEGINNING_PATTERN
...
MY_FIRST_BLOCK_ENDING_PATTERN

MY_SECOND_BLOCK_BEGINNING_PATTERN
...
MY_SECOND_BLOCK_ENDING_PATTERN
...

One may notice that the file is composed of two kinds of content blocks that have clear beginning and ending patterns, but are written without a specific order in the file. Even the number of repetitions of each block cannot be discovered without parsing the whole file at least once. In this case, a BlockFile is the best approach for modeling it.

One possible approach for modeling the file using the BlockFile abstraction is:

from typing import IO
import pandas as pd

from cfinterface import IntegerField, FloatField, DatetimeField, LiteralField
from cfinterface import Line, Block, BlockFile

class FirstBlock(Block):

    __slots__ = [
        "__header_lines",
        "__line_model",
    ]

    BEGIN_PATTERN = "MY_FIRST_BLOCK_BEGINNING_PATTERN"
    END_PATTERN = "MY_FIRST_BLOCK_ENDING_PATTERN"

    NUM_HEADER_LINES = 5

    def __init__(self, previous=None, next=None, data=None) -> None:
        super().__init__(previous, next, data)
        self.__header_lines = []
        date_field = DatetimeField(size=7, starting_position=0, format="%Y/%m")
        index_field = IntegerField(size=4, starting_position=11)
        value_field = FloatField(size=6, starting_position=19)
        self.__line_model = Line([date_field, index_field, value_field])

    def __eq__(self, o: object) -> bool:
        if not isinstance(o, FirstBlock):
            return False
        block: FirstBlock = o
        if not all(
            [
                isinstance(self.data, pd.DataFrame),
                isinstance(block.data, pd.DataFrame),
            ]
        ):
            return False
        else:
            return self.data.equals(block.data)

    # Override
    def read(self, file: IO, *args, **kwargs) -> bool:

        # Discards the line with the beginning pattern
        file.readline()

        # Reads header lines for writing later
        for _ in range(self.__class__.NUM_HEADER_LINES):
            self.__header_lines.append(file.readline())

        # Reads the data content
        dates = []
        indices = []
        values = []

        while True:

            line = file.readline()
            if FirstBlock.ends(line):
                self.data = pd.DataFrame(
                    {"Date": dates, "Index": indices, "Value": values}
                )
                break

            date, index, value = self.__line_model.read(line)
            dates.append(date)
            indices.append(index)
            values.append(value)

        return True

    # Override
    def write(self, file: IO, *args, **kwargs):

        file.write(self.__class__.BEGIN_PATTERN + "\n")

        # Writes header lines
        for line in self.__header_lines:
            file.write(line)

        # Writes data lines (assuming Line.write returns the formatted string)
        for _, line in self.data.iterrows():
            file.write(
                self.__line_model.write([line["Date"], line["Index"], line["Value"]])
            )

        file.write(self.__class__.END_PATTERN + "\n")


class SecondBlock(Block):
    # Implement in a similar way for the second block specifics
    ...


class MyBlockFile(BlockFile):

    BLOCKS = [
        FirstBlock,
        SecondBlock,
    ]

    # All the reading and writing logic is done by the framework,
    # finding when each block begins and calling their implementations.
    # The user can implement some properties for better suiting its use cases:

    @property
    def first_block_data(self) -> pd.DataFrame:
        block_dfs = [b.data for b in self.data.get_blocks_of_type(FirstBlock)]
        return pd.concat(block_dfs, ignore_index=True)

file = MyBlockFile.read("/path/to/file_described_above.txt")
assert type(file.first_block_data) == pd.DataFrame
file.write("/path/to/some_other_desired_file.txt")
# The content of the written file should be the same
# as the source file

As one can see, the read and write methods are implemented in a generic way in the base BlockFile class, and will deal with any of the block types informed in the BLOCKS class attribute. However, each Block must implement its own read and write methods, which are called when the BlockFile class successfully matches one of the BEGIN_PATTERN expressions. All the blocks that were successfully read are stored in the data field, accessible inside the built file object. This is a BlockData object, which implements a doubly linked list of the blocks that were parsed from the given file.

The developer may edit any of the desired blocks or any of their fields. When calling the write function, all blocks will be written to the file, following the logic of their own write functions, implemented by the developer.
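
For instance, a possible edit-and-write round trip, assuming the MyBlockFile model defined in the example above and the same hypothetical file paths, could look like this sketch:

# A minimal sketch, assuming the MyBlockFile model defined above and
# hypothetical file paths: edit the parsed blocks and write them back.
file = MyBlockFile.read("/path/to/file_described_above.txt")

for block in file.data.get_blocks_of_type(FirstBlock):
    # block.data is the DataFrame built in FirstBlock.read
    block.data["Value"] = block.data["Value"] * 1.10  # e.g. apply a 10% increase

file.write("/path/to/some_other_desired_file.txt")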

Any content in the file that was not matched to any of the given blocks is stored as an instance of the DefaultBlock object, which is a one-line block used to reproduce the entire file contents when writing it back.

Registers and RegisterFiles

A special case of blocks in a file is when the length of each block is exactly one line. In this case, each block is a single line, and defining all the requirements of the Block + BlockFile approach can be excessive.

For handling this special case, the developer can use another approach, defined by the RegisterFile abstraction together with the Register component.

Suppose there is a file with the following content:

DATA_HIGH  ID001   sudo.user  10/20/2025  901.25
DATA_HIGH  ID002   sudo.user  10/21/2025  100.20
DATA_HIGH  ID003   test.user  10/30/2025  100.20

DATA_LOW   01/01/2024   105.23
DATA_LOW   01/02/2024    29.15
DATA_LOW   01/03/2024     5.05

In this case, each line is identified by a unique beginning pattern in its first columns, together with a set of fields that are positioned in different places depending on the beginning pattern.

Each pattern determines a different Register class to be built, and the entire file can have a variable number of objects for each register.

One possible approach for modeling the file using the RegisterFile abstraction is:

from typing import Union, Optional, List
from datetime import datetime

from cfinterface import FloatField, DatetimeField, LiteralField
from cfinterface import Line, Register, RegisterFile

class DataHigh(Register):
    IDENTIFIER = "DATA_HIGH"
    IDENTIFIER_DIGITS = 9
    LINE = Line(
        [
            LiteralField(size=6, starting_position=11),
            LiteralField(size=9, starting_position=19),
            DatetimeField(size=10, starting_position=30, format="%m/%d/%Y"),
            FloatField(size=6, starting_position=42, decimal_digits=2),
        ]
    )

    @property
    def field_id(self) -> str:
        """
        Identifier of the DataHigh element.
        """
        return self.data[0]

    @field_id.setter
    def field_id(self, v: str):
        self.data[0] = v

    @property
    def user(self) -> str:
        """
        User associated with the DataHigh element.
        """
        return self.data[1]

    @user.setter
    def user(self, v: str):
        self.data[1] = v

    @property
    def date(self) -> datetime:
        """
        Date associated with the DataHigh element.
        """
        return self.data[2]

    @date.setter
    def date(self, v: datetime):
        self.data[2] = v

    @property
    def value(self) -> float:
        """
        Value associated with the DataHigh element.
        """
        return self.data[3]

    @value.setter
    def value(self, v: float):
        self.data[3] = v


class DataLow(Register):
    IDENTIFIER = "DATA_LOW"
    IDENTIFIER_DIGITS = 8
    LINE = Line(
        [
            DatetimeField(size=10, starting_position=11, format="%m/%d/%Y"),
            FloatField(size=6, starting_position=24, decimal_digits=2),
        ]
    )

    @property
    def date(self) -> datetime:
        """
        Date associated with the DataLow element.
        """
        return self.data[0]

    @date.setter
    def date(self, v: datetime):
        self.data[0] = v

    @property
    def value(self) -> float:
        """
        Value associated with the DataLow element.
        """
        return self.data[1]

    @value.setter
    def value(self, v: float):
        self.data[1] = v


class MyRegisterFile(RegisterFile):

    REGISTERS = [
        DataHigh,
        DataLow,
    ]

    # All the reading and writing logic is done by the framework,
    # finding which register is in each line and calling their implementations.
    # The user can implement some properties for better suiting its use cases:

    @property
    def data_high(self) -> Optional[Union[DataHigh, List[DataHigh]]]:
        return self.data.get_registers_of_type(DataHigh)

    @property
    def data_low(self) -> Optional[Union[DataLow, List[DataLow]]]:
        return self.data.get_registers_of_type(DataLow)

file = MyRegisterFile.read("/path/to/file_described_above.txt")
assert len(file.data_high) == 3
assert file.data_high[0].field_id == "ID001"
file.write("/path/to/some_other_desired_file.txt")
# The content of the written file should be the same
# as the source file

As one can see, the read and write methods are implemented in a generic way in the base RegisterFile class, and will deal with any of the register types informed in the REGISTERS class attribute. All the registers that were successfully read are stored in the data field, accessible inside the built file object. This is a RegisterData object, which implements a doubly linked list of the registers that were parsed from the given file.

The developer may edit any of the desired registers or any of their fields. When calling the write function, all registers will be written to the file, following the formatting of each field.
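
For instance, the properties declared on the register classes above can be used for editing values before writing the file back. A minimal sketch, assuming the MyRegisterFile model and the hypothetical paths from the example:

# A minimal sketch, assuming the MyRegisterFile model defined above and
# hypothetical file paths.
file = MyRegisterFile.read("/path/to/file_described_above.txt")

# Edit fields through the properties declared on the register classes
for register in file.data_high:
    register.value = register.value * 2

file.write("/path/to/some_other_desired_file.txt")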

Any content in the file that was not matched to any of the given registers is stored as an instance of the DefaultRegister object, which is a one-field register used to reproduce the entire file contents when writing it back.

Sections and SectionFiles

Another special case of blocks in a file is when the beginning pattern of each block does not matter. In this case, the file to be modeled is usually well determined in terms of content and ordering. However, if the developer also models other files using the BlockFile and RegisterFile approaches and wants to maintain all the files in the same framework, the Section and SectionFile abstractions can be used. Also, following the framework allows versioning each file part separately, which can be useful for schemas that change over time.

Suppose there is a file with the following content:

Date       Index     Value
----------------------------
2020/01        1    1000.0
2020/02        1    2000.0
2020/01        2    3000.0
2020/02        2    4000.0
2020/01        3    5000.0
2020/02        3    6000.0
----------------------------

  Username    Last Login
-------------------------
     admin    1996/01/01
  sunshine    2000/01/01
 pineapple    1996/01/01
     admin    1996/01/01
-------------------------

If the file always exhibits these two tables, in the same order, and there is no chance of these information blocks repeating, the SectionFile approach can be used. Also, one may note that there are no clear beginning and ending patterns like in the previous BlockFile example.

One possible approach for modeling the file using the SectionFile abstraction is:

from typing import IO
import pandas as pd

from cfinterface import IntegerField, FloatField, DatetimeField
from cfinterface import Line, Section, SectionFile

class FirstSection(Section):

    __slots__ = [
        "__line_model",
    ]

    HEADER_LINE = "Date       Index     Value"
    MARGIN_LINE = "----------------------------"

    def __init__(self, previous=None, next=None, data=None) -> None:
        super().__init__(previous, next, data)
        date_field = DatetimeField(size=7, starting_position=0, format="%Y/%m")
        index_field = IntegerField(size=4, starting_position=11)
        value_field = FloatField(size=6, starting_position=19)
        self.__line_model = Line([date_field, index_field, value_field])

    def __eq__(self, o: object) -> bool:
        if not isinstance(o, FirstSection):
            return False
        section: FirstSection = o
        if not all(
            [
                isinstance(self.data, pd.DataFrame),
                isinstance(section.data, pd.DataFrame),
            ]
        ):
            return False
        else:
            return self.data.equals(section.data)

    # Override
    def read(self, file: IO, *args, **kwargs) -> bool:

        # Discards the lines with the header and margin
        for _ in range(2):
            file.readline()

        # Reads the data content
        dates = []
        indices = []
        values = []

        while True:

            line = file.readline()
            if self.MARGIN_LINE in line:
                self.data = pd.DataFrame(
                    {"Date": dates, "Index": indices, "Value": values}
                )
                break

            date, index, value = self.__line_model.read(line)
            dates.append(date)
            indices.append(index)
            values.append(value)

        return True

    # Override
    def write(self, file: IO, *args, **kwargs):

        file.write(self.HEADER_LINE + "\n")
        file.write(self.MARGIN_LINE + "\n")

        # Writes data lines (assuming Line.write returns the formatted string)
        for _, line in self.data.iterrows():
            file.write(
                self.__line_model.write([line["Date"], line["Index"], line["Value"]])
            )

        file.write(self.MARGIN_LINE + "\n")


class SecondSection(Section):
    # Implement in a similar way for the second section specifics
    ...


class MySectionFile(SectionFile):

    SECTIONS = [
        FirstSection,
        SecondSection,
    ]

    # All the reading and writing logic is done by the framework.
    # The user can implement some properties for better suiting its use cases:

    @property
    def first_section_data(self) -> pd.DataFrame:
        s = self.data.get_sections_of_type(FirstSection)
        return s.data

file = MySectionFile.read("/path/to/file_described_above.txt")
assert type(file.first_section_data) == pd.DataFrame
file.write("/path/to/some_other_desired_file.txt")
# The content of the written file should be the same
# as the source file

As one can see, the read and write methods are implemented in a generic way in the base SectionFile class, and will call the specific section functions in the same order that they were declared in the SECTIONS class attribute. All the sections that were successfully read are stored in the data field, accessible inside the built file object. This is a SectionData object, which implements a doubly linked list of the sections that were parsed from the given file.

The developer may edit any of the desired sections or any of their fields. When calling the write function, all sections will be written to the file, following the formatting of each field.

All data that may exist in the file after the last modeled section will be read as DefaultSection objects. These are one-line sections used for compatibility with data not explicitly modeled by the developer.

File Encodings

Currently, when modeling a file through any of the aforementioned approaches, the developer can choose the preferred encoding for reading and writing. Also, instead of a single encoding, a list of encodings can be supplied, which will be used for reading and writing.

Some ways for specifying encodings are:

from cfinterface import BlockFile

class MyBlockFileWithSingleEncoding(BlockFile):

    ENCODING = "utf-8"


class MyBlockFileWithManyEncodings(BlockFile):

    ENCODING = ["utf-8", "latin-1", "ascii"]
When reading, each of the supplied encodings is tried, in order. The first encoding that successfully parses the whole file ends the reading process. For writing, the file model always uses the first encoding of the list.
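
Conceptually, the reading fallback behaves like the sketch below, which is only an illustration of the rule described above and not the framework's internal implementation:

from typing import List

def read_with_fallback(path: str, encodings: List[str]) -> str:
    """Illustration only: try each encoding in order and return the content
    decoded by the first one that succeeds."""
    for encoding in encodings:
        try:
            with open(path, "r", encoding=encoding) as f:
                return f.read()
        except UnicodeDecodeError:
            continue
    raise ValueError(f"none of the encodings {encodings} could decode {path}")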

Modeling Binary Files

When a file contains data encoded in binary format instead of text, the cfinterface framework can still be applied for modeling its contents, supporting reading and writing. The same file models can be used, but with some differences in the meaning of some fundamental actions, which are better illustrated in the examples page.

For defining a file model as binary, one may set the STORAGE class attribute:

from cfinterface import BlockFile

class MyTextualFile(BlockFile):

    STORAGE = "TEXT"


class MyBinaryFile(BlockFile):

    STORAGE = "BINARY"

Versioning Files

Files can change their schema over time, resulting in multiple versions. One approach is to define multiple file models, but this could result in large amounts of copied and pasted code, since the changes between schemas could be minimal.

The cfinterface framework supports file versioning by allowing the user to define the lists of elements (Blocks, Registers or Sections) that exist in each version of the file.

For example, suppose there is a file that was versioned. The file always contained two blocks, but one of them had a small change to its schema when the file evolved from version 1.0 to 2.0. In this case, the developer might do:

from cfinterface import Block, BlockFile

class MyConstantBlock(Block):
    pass

class MyVersionedOldBlock(Block):
    pass

class MyVersionedNewBlock(Block):
    pass

class MyVersionedFile(BlockFile):

    VERSIONS = {
        "1.0": [
            MyConstantBlock,
            MyVersionedOldBlock,
        ],
        "2.0": [
            MyConstantBlock,
            MyVersionedNewBlock,
        ],
    }


MyVersionedFile.set_version("1.0")
old_file = MyVersionedFile.read("path/to/old/file")
MyVersionedFile.set_version("2.0")
new_file = MyVersionedFile.read("path/to/new/file")

In this case, the default behavior for any file is reading with its latest version. If the user desires to read a previous version of the file, one can use the set_version class method to change this behavior. The version comparison is done using Python's native str comparison.

If one chooses to set a version V which is not an existing key in the VERSIONS dict, the framework looks for the first version W among the dict keys such that W <= V. If no such key is found, the latest version is used. Using the previous code block as an example, setting the MyVersionedFile version to 1.5 would fall back to 1.0; however, setting it to 0.5 would result in using the latest version, 2.0.
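
Using the classes from the previous code block, the fallback rule described above can be illustrated as:

# Continuing the MyVersionedFile example above: set_version only selects
# which schema will be used on the next read.
MyVersionedFile.set_version("1.5")  # no exact key; falls back to the "1.0" schema
MyVersionedFile.set_version("0.5")  # no key <= "0.5"; the latest ("2.0") schema is used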