
There are three ways we can test XFS:
But before that we will need to create XFS images for testing purposes.
Currently only Linux has full XFS support, so we will use Linux to generate the file system images.
First we need to create an empty sparse image with dd, using seek to set the image size.
Do note that we can create an image of whatever size or name we want; for example, seek=5G creates fs.img with a size of 5 GB, while seek=10G would create a 10 GB image.
Now we can mount our file system image and create entries for testing the Haiku XFS driver.
First we have to compile xfs_shell and run it against the image, where fs.img is the file system image we created earlier on Linux.
First build a version of Haiku with XFS support; to do this we need to add "xfs" to the `image
Whenever we run a file system under fs_shell we cannot use the system headers; fs_shell provides its own compatible headers, which must be used whenever we mount the XFS file system through xfs_shell.
To resolve this we use the **system_dependencies.h** header file, which takes care of including the correct headers whether we mount the XFS file system through xfs_shell or directly inside Haiku.
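
As a rough illustration of this pattern (modelled on what Haiku's other file system drivers do; the exact includes may differ in the XFS driver)::

    #ifndef _SYSTEM_DEPENDENCIES_H
    #define _SYSTEM_DEPENDENCIES_H

    #ifdef FS_SHELL
    // Built as part of xfs_shell: use the fs_shell compatibility wrappers
    // instead of the real system headers.
    #include "fssh_api_wrapper.h"
    #else
    // Regular in-tree build: use the normal Haiku system headers.
    #include <fs_cache.h>
    #include <fs_interface.h>
    #include <SupportDefs.h>
    #endif

    #endif  // _SYSTEM_DEPENDENCIES_H
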
repeat the same checks again and again for all headers, we created a *VerifyHeader* template
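
A minimal sketch of the idea (the template in the driver has more parameters and checks; the names here are illustrative)::

    #include <cstdint>

    // HeaderType is any on-disk header class that exposes its magic value;
    // on v5 file systems additional fields (CRC, UUID, owner) are verified too.
    template<typename HeaderType>
    bool
    VerifyHeader(const HeaderType& header, uint32_t expectedMagic, bool isVersion5)
    {
        if (header.Magic() != expectedMagic)
            return false;
        if (isVersion5 && !header.VerifyVersion5Fields())  // hypothetical accessor
            return false;
        return true;
    }
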
in read format. Once we have write support for XFS, we will only support V2 and V3 inodes.
Since the size of an inode can differ between XFS versions, we pass CoreInodeSize()
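
As a sketch of why this matters (sizes are per the published XFS on-disk format; the driver's actual accessor may be organised differently)::

    #include <cstdint>

    // Version 1/2 inode cores are 96 bytes, while version 3 inodes (used on
    // v5 file systems) have a 176-byte core carrying a CRC, change counter,
    // inode number and UUID, so the data fork starts further in.
    uint16_t
    CoreInodeSize(uint8_t inodeVersion)
    {
        if (inodeVersion == 3)
            return 176;
        return 96;
    }
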
* When the number of entries inside a directory is small enough that we can store all
* The header for ShortForm entries is located at the data fork pointer inside the inode, which we cast
* Since the number of entries is small, we can simply iterate over all entries (as sketched after this list) for *Lookup()* and
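
A rough sketch of such a short-form lookup (layout and names are illustrative; the real driver also handles 8-byte inode numbers)::

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Packed on-disk layout of one short-form entry: name length, a 2-byte
    // offset, the name itself, then the inode number right after the name.
    struct ShortFormEntry {
        uint8_t namelen;
        uint8_t offset[2];
        uint8_t name[1];
    };

    bool
    ShortFormLookup(const uint8_t* dataFork, int entryCount,
        const char* wanted, size_t wantedLength, uint64_t& inodeNumber)
    {
        const uint8_t* pointer = dataFork;
        for (int i = 0; i < entryCount; i++) {
            const ShortFormEntry* entry = (const ShortFormEntry*)pointer;
            const uint8_t* number = entry->name + entry->namelen;
            if (entry->namelen == wantedLength
                && memcmp(entry->name, wanted, wantedLength) == 0) {
                // The inode number is stored big-endian right after the name
                // (4 bytes assumed here).
                inodeNumber = (uint64_t(number[0]) << 24) | (number[1] << 16)
                    | (number[2] << 8) | number[3];
                return true;
            }
            // Advance past the 3 header bytes, the name and the inode number.
            pointer += 3 + entry->namelen + 4;
        }
        return false;
    }
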
* When the number of entries grows such that we can no longer store all directory metadata
  inside the inode, we use extents.
* In a block directory we have a single directory block for the data header, leaf header
* Since the XFS V4 and V5 data headers differ, we use a virtual class *ExtentDataHeader* which
* Now that we have a virtual class with vtable pointers, we need to be very careful about the data stored
  on disk versus the data held inside the class (see the sketch after this list); for example, we can no longer use the sizeof() operator on the class to
* In the *GetNext()* function we simply iterate over all entries inside the buffer; a found
  entry could be an unused one, so we need to check whether the entry we found is a proper entry.
* In the *Lookup()* function we first generate a hash value of the entry name, then we find
  At last, if the entry matches we return B_OK, otherwise we return B_ENTRY_NOT_FOUND.
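
To make the sizeof() pitfall above concrete, here is an illustrative sketch (the class and helper names follow the text but are not the driver's exact code; the header sizes are per the XFS on-disk format)::

    #include <cstddef>
    #include <cstdint>

    // Abstract interface over the V4 and V5 on-disk data block headers.
    class ExtentDataHeader {
    public:
        virtual             ~ExtentDataHeader() {}
        virtual uint32_t    Magic() const = 0;
        virtual uint32_t    FirstUsedOffset() const = 0;
    };

    // sizeof() on a concrete subclass also counts the compiler-generated
    // vtable pointer and padding, so it no longer matches the on-disk size.
    // A helper returning the real on-disk size is used instead:
    // xfs_dir2_data_hdr (V4) is 16 bytes, xfs_dir3_data_hdr (V5) is 64 bytes.
    size_t
    SizeOfDataHeader(bool isVersion5)
    {
        return isVersion5 ? 64 : 16;
    }
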
* When the number of entries grows such that we can no longer store all directory metadata inside
  a single directory block, we use the leaf format.
* In a leaf directory we have multiple directory blocks for the data header and free data header,
* To check if a given extent-based inode is of leaf type (see the sketch after this list), we simply check the offset inside the last
* Since the XFS V4 and V5 leaf headers differ, we use a virtual class *ExtentLeafHeader* which acts
* Instead of using the sizeof() operator on ExtentLeafHeader, we should always use the *SizeOfLeafHeader()* function
* The *Lookup()* and *GetNext()* functions are similar to those in block directories, except that now we don't use si…
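
A sketch of the leaf-type check mentioned above (types and names are illustrative; the constant is the fixed 32 GiB logical offset at which XFS places directory leaf blocks)::

    #include <cstdint>

    // Minimal extent record: logical start offset and length, both counted
    // in file system blocks.
    struct ExtentMap {
        uint64_t fileOffset;
        uint64_t startBlock;
        uint64_t blockCount;
    };

    // Directory leaf blocks live in the logical segment starting at 32 GiB
    // (free-index blocks start at 64 GiB), so looking at where the last
    // extent starts tells us whether the directory is in leaf format.
    bool
    IsLeafDirectory(const ExtentMap* extents, int extentCount, uint32_t blockLog)
    {
        const uint64_t leafStart = (uint64_t(32) << 30) >> blockLog;
        const uint64_t freeStart = (uint64_t(64) << 30) >> blockLog;
        uint64_t last = extents[extentCount - 1].fileOffset;
        return last >= leafStart && last < freeStart;
    }
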
to read all the data of a file we simply iterate over all the extents, which is very similar to how we
When the file becomes so large that we can no longer store more extent maps inside the inode, the
first we read the blocks of the B+Tree to extract the extent maps and then read the extents
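
The core of this read path can be sketched as follows (the structures are illustrative; offsets and lengths are in file system blocks). Once the extent maps have been collected, either straight from the inode's data fork or by walking the B+Tree, reading boils down to finding the extent that covers the requested block::

    #include <cstdint>
    #include <vector>

    // Minimal extent map: where the extent starts in the file, where it
    // starts on disk, and how long it is, all in file system blocks.
    struct ExtentMap {
        uint64_t fileOffset;
        uint64_t startBlock;
        uint64_t blockCount;
    };

    // Return the extent that covers the given logical file block, or
    // nullptr if the block falls into a hole or beyond the end of file.
    const ExtentMap*
    FindExtent(const std::vector<ExtentMap>& extents, uint64_t fileBlock)
    {
        for (const ExtentMap& extent : extents) {
            if (fileBlock >= extent.fileOffset
                && fileBlock < extent.fileOffset + extent.blockCount)
                return &extent;
        }
        return nullptr;
    }
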
Currently we only have read support for XFS; the following briefly summarises the read support for all formats.
Currently we have no extended attribute support for XFS.
Currently we have no symlink support for XFS.
Currently we have no support.
Currently we have no support; this data structure is still under construction.
Currently we have no support; this data structure is still under construction.
Currently we have no write support for XFS.