| Commit message | Author |

Signed-off-by: Lars Hjemli <hjemli@gmail.com>

Signed-off-by: Lars Hjemli <hjemli@gmail.com>

While at it, replace the cgit_[lib_]error constants with a proper function.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
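As a purely illustrative sketch (not the actual cgit change), replacing exported error-string constants with a function could look like the following; the name cgit_print_error, the HTML markup and the sample message are assumptions made for the example.

```c
#include <stdio.h>

/* Illustration only: instead of exporting fixed error strings
 * (e.g. a cgit_error[] constant holding canned HTML), expose one
 * function that formats an arbitrary message on demand. */
static void cgit_print_error(const char *msg)
{
	printf("<div class='error'>%s</div>\n", msg);
}

int main(void)
{
	/* hypothetical usage */
	cgit_print_error("Unable to parse the requested path");
	return 0;
}
```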
Signed-off-by: Lars Hjemli <hjemli@gmail.com>

Add a global variable, cgit_max_lock_attempts, to avoid the possibility of
infinite loops when failing to acquire a lockfile. This could happen on
broken setups or under crazy server load.
Incidentally, this also fixes a lurking bug in cache_lock() where an
uninitialized return value was used.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
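The idea can be sketched as a bounded retry loop. This is an illustration rather than the actual cache.c code; the limit value, the lockfile path and the helper name are assumptions.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Illustrative stand-in for a configurable global limit. */
static int max_lock_attempts = 5;

/* Try to create the lockfile exclusively; give up after a fixed number
 * of attempts instead of looping forever. */
static int try_lock(const char *lockfile)
{
	int attempts = 0;

	while (attempts++ < max_lock_attempts) {
		int fd = open(lockfile, O_WRONLY | O_CREAT | O_EXCL, 0644);
		if (fd >= 0)
			return fd;     /* got the lock */
		if (errno != EEXIST)
			break;         /* real error: stop retrying */
		sleep(1);              /* lock held by another process */
	}
	return -1;                     /* always a defined return value */
}

int main(void)
{
	int fd = try_lock("/tmp/cgit-example.lock");
	if (fd < 0) {
		fprintf(stderr, "could not lock: %s\n", strerror(errno));
		return 1;
	}
	/* ... generate the cached page here ... */
	close(fd);
	unlink("/tmp/cgit-example.lock");
	return 0;
}
```
The point mirrored from the commit message is that every path out of the function returns an explicit value, so a caller never sees an uninitialized result, and a stuck lockfile can only delay a request for a bounded number of attempts.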
Signed-off-by: Lars Hjemli <hjemli@gmail.com>

This closes the door on unnecessary calls to cgit_fill_cache().
Noticed by Linus.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
Signed-off-by: Lars Hjemli <hjemli@gmail.com>

Signed-off-by: Lars Hjemli <hjemli@gmail.com>

This enables internal caching of page output.

Page requests are split into four groups:
1) repo listing (front page)
2) repo summary
3) repo pages w/symbolic references in query string
4) repo pages w/constant sha1's in query string

Each group has a TTL specified in minutes. When a page is requested, the cache
filename is stat(2)'ed and its st_mtime is compared to time(2). If the TTL has
expired (or the file doesn't exist), the cached file is regenerated.

When generating a cached file, locking is used to avoid parallel processing
of the request. If multiple processes try to acquire t
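A small sketch of the TTL test described above, assuming the stat(2)/time(2) scheme from the message; the function name, the example path and the minutes-to-seconds conversion are illustrative, not the real cgit cache code.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

/* Return 1 if the cache file is missing or older than ttl_minutes,
 * i.e. the page must be regenerated. */
static int cache_expired(const char *path, int ttl_minutes)
{
	struct stat st;

	if (stat(path, &st) != 0)
		return 1;      /* no cache file yet: regenerate */
	return time(NULL) - st.st_mtime > (time_t)ttl_minutes * 60;
}

int main(void)
{
	/* Example: a repo summary page cached for 5 minutes. */
	const char *cachefile = "/tmp/cgit-example-cache";

	if (cache_expired(cachefile, 5))
		puts("TTL expired (or no cache file): regenerate under a lockfile");
	else
		puts("serve the cached file as-is");
	return 0;
}
```
Regeneration itself would then happen under the lockfile described in the earlier commit, so only one process rebuilds an expired page at a time.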