Compare commits


10 Commits

minish 3513337ac7 (2023-12-08 14:20:35 -05:00)
    update dockerfile for new config
    also fix glibc error

minish 661f2f14dd (2023-12-08 14:17:19 -05:00)
    doc update

minish 0d176bca40 (2023-12-08 13:08:38 -05:00)
    --config is now a required switch

minish a0ffd1ddd1 (2023-12-07 23:38:21 -05:00)
    bump rust version

minish f5c67c64d7 (2023-12-07 13:33:19 -05:00)
    move default_motd func to a better place

minish a315baa258 (2023-12-07 13:31:27 -05:00)
    config restructure + motd option

minish 2aa97e05b4 (2023-12-07 13:21:13 -05:00)
    fix typo in ViewError::NotFound doc comment

minish 5f8adf023f (2023-12-02 15:58:00 -05:00)
    lower path traversal warning to an info
    (the new default log level is warning, so i don't want it to be
    possible to spam server logs on default config)

minish d9f560677a (2023-11-10 19:36:43 -05:00)
    lol oops

minish 3fa4caad92 (2023-11-09 21:22:02 -05:00)
    config rework ep1
    + misc refactoring and tweaks
13 changed files with 1037 additions and 351 deletions

Cargo.lock (generated; 895 lines changed)
File diff suppressed because it is too large

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "breeze"
-version = "0.1.4"
+version = "0.1.5"
 edition = "2021"

 [dependencies]
@@ -15,6 +15,11 @@ rand = "0.8.5"
 async-recursion = "1.0.0"
 walkdir = "2"
 futures = "0.3"
-log = "0.4"
-pretty_env_logger = "0.5.0"
+tracing = "0.1"
+tracing-subscriber = "0.3"
 archived = { path = "./archived" }
+xxhash-rust = { version = "0.8.7", features = ["xxh3"] }
+serde = { version = "1.0.189", features = ["derive"] }
+toml = "0.8.2"
+clap = { version = "4.4.6", features = ["derive"] }
+serde_with = "3.4.0"

Dockerfile

@@ -1,12 +1,12 @@
 # builder
-FROM rust:1.73 as builder
+FROM rust:1.74 as builder

 WORKDIR /usr/src/breeze
 COPY . .

 RUN cargo install --path .

 # runner
-FROM debian:bullseye-slim
+FROM debian:bookworm-slim

 RUN apt-get update && rm -rf /var/lib/apt/lists/*
@@ -16,4 +16,4 @@ RUN useradd -m runner
 USER runner

 EXPOSE 8000
-CMD [ "breeze" ]
+CMD [ "breeze", "--config", "/etc/breeze.toml" ]

README.md

@@ -1,6 +1,8 @@
 # breeze
 breeze is a simple, performant file upload server.
+
+The primary instance is https://picture.wtf.

 ## Features
 Compared to the old Express.js backend, breeze has
 - Streamed uploading
@@ -15,10 +17,10 @@ I wrote breeze with the intention of running it in a container, but it runs just
 Either way, you need to start off by cloning the Git repository.
 ```bash
-git clone https://git.min.rip/minish/breeze.git
+git clone https://git.min.rip/min/breeze.git
 ```

-To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below.
+To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below. You can start it using `docker compose up -d`.
 ```
 version: '3.6'
@@ -29,20 +31,15 @@ services:
     volumes:
       - /srv/uploads:/data
+      - ./breeze.toml:/etc/breeze.toml

     ports:
       - 8000:8000

-    environment:
-      - BRZ_BASE_URL=http://127.0.0.1:8000
-      - BRZ_SAVE_PATH=/data
-      - BRZ_UPLOAD_KEY=hiiiiiiii
-      - BRZ_CACHE_UPL_MAX_LENGTH=134217728 # allow files up to ~134 MiB to be cached
-      - BRZ_CACHE_UPL_LIFETIME=1800 # let uploads stay in cache for 30 minutes
-      - BRZ_CACHE_SCAN_FREQ=60 # scan the cache for expired files if more than 60 seconds have passed since the last scan
-      - BRZ_CACHE_MEM_CAPACITY=4294967296 # allow 4 GiB of data to be in the cache at once
 ```

-For this configuration, it is expected that there is a clone of the Git repository in the `./breeze` folder. You can start it using `docker compose up -d`.
+For this configuration, it is expected that:
+* there is a clone of the Git repository in the `./breeze` folder
+* there is a `breeze.toml` config file in the current directory
+* there is a directory at `/srv/uploads` for storing uploads

 It can also be installed directly if you have the Rust toolchain installed:
 ```bash
@@ -51,15 +48,59 @@ cargo install --path .
 ```

 ## Usage
 ### Hosting
-Configuration is read through environment variables, because I wanted to run this using Docker Compose.
-```
-BRZ_BASE_URL - base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)
-BRZ_SAVE_PATH - this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\brzuploads)
-BRZ_UPLOAD_KEY (optional) - if not empty, the key you specify will be required to upload new files.
-BRZ_CACHE_UPL_MAX_LENGTH - this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)
-BRZ_CACHE_UPL_LIFETIME - this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)
-BRZ_CACHE_SCAN_FREQ - this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)
-BRZ_CACHE_MEM_CAPACITY - this is the amount of memory the cache will hold before dropping entries
-```
+Configuration is read through a toml file.
+By default it'll try to read `./breeze.toml`, but you can specify a different path using the `-c`/`--config` command line switch.
+
+Here is an example config file:
+```toml
+[engine]
+# The base URL that the HTTP server will be accessible on.
+# This is used for formatting upload URLs.
+# Setting it to "https://picture.wtf" would result in
+# upload urls of "https://picture.wtf/p/abcdef.png", etc.
+base_url = "http://127.0.0.1:8000"
+
+# The location that uploads will be saved to.
+# It should be a path to a directory on disk that you can write to.
+save_path = "/data"
+
+# OPTIONAL - If set, the static key specified will be required to upload new files.
+# If it is not set, no key will be required.
+upload_key = "hiiiiiiii"
+
+# OPTIONAL - specifies what to show when the site is visited on http
+# It is sent with text/plain content type.
+# There are two variables you can use:
+# %uplcount% - total number of uploads present on the server
+# %version% - current breeze version (e.g. 0.1.5)
+motd = "my image host, currently hosting %uplcount% files"
+
+[engine.cache]
+# The file size (in bytes) that a file must be under
+# to get cached.
+max_length = 134_217_728
+
+# How long a cached upload will remain cached. (in seconds)
+upload_lifetime = 1800
+
+# How often the cache will be checked for expired uploads.
+# It is not a continuous scan, and it is only triggered upon a cache operation.
+scan_freq = 60
+
+# How much memory (in bytes) the cache is allowed to consume.
+mem_capacity = 4_294_967_295
+
+[http]
+# The address that the HTTP server will listen on. (ip:port)
+# Use 0.0.0.0 as the IP to listen publicly; 127.0.0.1 only lets your
+# own computer access it.
+listen_on = "127.0.0.1:8000"
+
+[logger]
+# OPTIONAL - the current log level.
+# Default level is warn.
+level = "warn"
+```

 ### Uploading
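
The upshot of these configuration changes for operators: breeze no longer reads `BRZ_*` environment variables and instead loads the TOML file passed via `-c`/`--config`. A minimal sketch of the new launch flow outside Docker (paths are illustrative):

```bash
# install from a clone of the repository
cargo install --path .

# point breeze at a config file; this mirrors the Dockerfile's CMD
breeze --config /etc/breeze.toml
```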

archived/Cargo.lock (generated; 7 lines changed)

@@ -8,6 +8,7 @@ version = "0.2.0"
 dependencies = [
  "bytes",
  "once_cell",
+ "rustc-hash",
 ]

 [[package]]
@@ -21,3 +22,9 @@ name = "once_cell"
 version = "1.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b1c601810575c99596d4afc46f78a678c80105117c379eb3650cf99b8a21ce5b"
+
+[[package]]
+name = "rustc-hash"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2"

archived/Cargo.toml

@@ -6,4 +6,4 @@ license = "MIT"

 [dependencies]
 bytes = "1.3.0"
-once_cell = "1.3.1"
\ No newline at end of file
+once_cell = "1.3.1"

archived/src/lib.rs

@@ -29,7 +29,11 @@ impl Archive {
         }
     } */

-    pub fn with_full_scan(full_scan_frequency: Duration, entry_lifetime: Duration, capacity: usize) -> Self {
+    pub fn with_full_scan(
+        full_scan_frequency: Duration,
+        entry_lifetime: Duration,
+        capacity: usize,
+    ) -> Self {
         Self {
             cache_table: HashMap::with_capacity(256),
             full_scan_frequency: Some(full_scan_frequency),
@@ -67,11 +71,7 @@ impl Archive {
             .map(|cache_entry| &cache_entry.value)
     }

-    pub fn get_or_insert<F>(
-        &mut self,
-        key: String,
-        factory: F,
-    ) -> &Bytes
+    pub fn get_or_insert<F>(&mut self, key: String, factory: F) -> &Bytes
     where
         F: Fn() -> Bytes,
     {
@@ -87,15 +87,15 @@ impl Archive {
                 &occupied.into_mut().value
             }
-            Entry::Vacant(vacant) => &vacant.insert(CacheEntry::new(factory(), self.entry_lifetime)).value,
+            Entry::Vacant(vacant) => {
+                &vacant
+                    .insert(CacheEntry::new(factory(), self.entry_lifetime))
+                    .value
+            }
         }
     }

-    pub fn insert(
-        &mut self,
-        key: String,
-        value: Bytes,
-    ) -> Option<Bytes> {
+    pub fn insert(&mut self, key: String, value: Bytes) -> Option<Bytes> {
         let now = SystemTime::now();

         self.try_full_scan_expired_items(now);
@@ -144,7 +144,7 @@ impl Archive {
                 Some(())
             }
-            None => None
+            None => None,
         }
     }

src/config.rs (new file; 81 lines)

@@ -0,0 +1,81 @@
use std::{path::PathBuf, time::Duration};

use serde::Deserialize;
use serde_with::{serde_as, DisplayFromStr, DurationSeconds};
use tracing_subscriber::filter::LevelFilter;

#[derive(Deserialize)]
pub struct Config {
    pub engine: EngineConfig,
    pub http: HttpConfig,
    pub logger: LoggerConfig,
}

fn default_motd() -> String {
    "breeze file server (v%version%) - currently hosting %uplcount% files".to_string()
}

#[derive(Deserialize)]
pub struct EngineConfig {
    /// The url that the instance of breeze is meant to be accessed from.
    ///
    /// ex: https://picture.wtf would generate links like https://picture.wtf/p/abcdef.png
    pub base_url: String,

    /// Location on disk the uploads are to be saved to
    pub save_path: PathBuf,

    /// Authentication key for new uploads, will be required if this is specified. (optional)
    #[serde(default)]
    pub upload_key: String,

    /// Configuration for cache system
    pub cache: CacheConfig,

    /// Motd displayed when the server's index page is visited.
    ///
    /// This isn't explicitly engine-related but the engine is what gets passed to routes,
    /// so it is here for now.
    #[serde(default = "default_motd")]
    pub motd: String,
}

#[serde_as]
#[derive(Deserialize)]
pub struct CacheConfig {
    /// The maximum length in bytes that a file can be
    /// before it skips cache
    pub max_length: usize,

    /// The amount of time a file can last inside the cache (in seconds)
    #[serde_as(as = "DurationSeconds")]
    pub upload_lifetime: Duration,

    /// How often the cache is to be scanned for
    /// expired entries (in seconds)
    #[serde_as(as = "DurationSeconds")]
    pub scan_freq: Duration,

    /// How much memory the cache is allowed to use (in bytes)
    pub mem_capacity: usize,
}

#[derive(Deserialize)]
pub struct HttpConfig {
    pub listen_on: String,
}

fn default_level_filter() -> LevelFilter {
    LevelFilter::WARN
}

#[serde_as]
#[derive(Deserialize)]
pub struct LoggerConfig {
    /// Minimum level a log must be for it to be shown.
    /// This defaults to "warn" if not specified.
    #[serde_as(as = "DisplayFromStr")]
    #[serde(default = "default_level_filter")]
    // yes... kind of a hack but serde doesn't have anything better
    pub level: LevelFilter,
}
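
As an aside, the `DisplayFromStr` workaround relies on `LevelFilter` implementing `FromStr` and `Display`, which lets serde treat it as a plain string in the config. A hypothetical test (not part of this diff) sketching the behaviour:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_level_as_string() {
        // "debug" goes through FromStr thanks to DisplayFromStr
        let cfg: LoggerConfig = toml::from_str(r#"level = "debug""#).unwrap();
        assert_eq!(cfg.level, LevelFilter::DEBUG);
    }

    #[test]
    fn missing_level_defaults_to_warn() {
        // the serde default kicks in when the key is absent
        let cfg: LoggerConfig = toml::from_str("").unwrap();
        assert_eq!(cfg.level, LevelFilter::WARN);
    }
}
```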

src/engine.rs

@@ -2,7 +2,6 @@ use std::{
     ffi::OsStr,
     path::{Path, PathBuf},
     sync::atomic::{AtomicUsize, Ordering},
-    time::Duration,
 };

 use archived::Archive;
@@ -18,70 +17,69 @@ use tokio::{
     },
 };
 use tokio_stream::StreamExt;
+use tracing::{debug, error, info};
 use walkdir::WalkDir;

-use crate::view::{ViewError, ViewSuccess};
+use crate::{
+    config,
+    view::{ViewError, ViewSuccess},
+};

 /// breeze engine! this is the core of everything
 pub struct Engine {
-    // state
-    cache: RwLock<Archive>, // in-memory cache
-    pub upl_count: AtomicUsize, // cached count of uploaded files
+    /// The in-memory cache that cached uploads are stored in.
+    cache: RwLock<Archive>,

-    // config
-    pub base_url: String, // base url for formatting upload urls
-    save_path: PathBuf, // where uploads are saved to disk
-    pub upload_key: String, // authorisation key for uploading new files
+    /// Cached count of uploaded files.
+    pub upl_count: AtomicUsize,

-    cache_max_length: usize, // if an upload is bigger than this size, it won't be cached
+    /// Engine configuration
+    pub cfg: config::EngineConfig,
 }

 impl Engine {
-    // create a new engine
-    pub fn new(
-        base_url: String,
-        save_path: PathBuf,
-        upload_key: String,
-        cache_max_length: usize,
-        cache_lifetime: Duration,
-        cache_full_scan_freq: Duration, // how often the cache will be scanned for expired items
-        cache_mem_capacity: usize,
-    ) -> Self {
+    /// Creates a new instance of the breeze engine.
+    pub fn new(cfg: config::EngineConfig) -> Self {
         Self {
             cache: RwLock::new(Archive::with_full_scan(
-                cache_full_scan_freq,
-                cache_lifetime,
-                cache_mem_capacity,
+                cfg.cache.scan_freq,
+                cfg.cache.upload_lifetime,
+                cfg.cache.mem_capacity,
             )),
-            upl_count: AtomicUsize::new(WalkDir::new(&save_path).min_depth(1).into_iter().count()), // count the amount of files in the save path and initialise our cached count with it
+            upl_count: AtomicUsize::new(
+                WalkDir::new(&cfg.save_path)
+                    .min_depth(1)
+                    .into_iter()
+                    .count(),
+            ), // count the amount of files in the save path and initialise our cached count with it

-            base_url,
-            save_path,
-            upload_key,
-            cache_max_length,
+            cfg,
         }
     }

     /// Returns if an upload would be able to be cached
     #[inline(always)]
     fn will_use_cache(&self, length: usize) -> bool {
-        length <= self.cache_max_length
+        length <= self.cfg.cache.max_length
     }

-    // checks in cache or disk for an upload using a pathbuf
+    /// Check if an upload exists in cache or on disk
     pub async fn upload_exists(&self, path: &Path) -> bool {
         let cache = self.cache.read().await;

-        // check if upload is in cache
+        // extract file name, since that's what cache uses
         let name = path
             .file_name()
             .and_then(OsStr::to_str)
             .unwrap_or_default()
             .to_string();

+        // check in cache
         if cache.contains_key(&name) {
             return true;
         }

-        // check if upload is on disk
+        // check on disk
         if path.exists() {
             return true;
         }
@@ -89,7 +87,10 @@ impl Engine {
         return false;
     }

-    // generate a new save path for an upload
+    /// Generate a new save path for an upload.
+    ///
+    /// This will call itself recursively if it picks
+    /// a name that's already used. (it is rare)
     #[async_recursion::async_recursion]
     pub async fn gen_path(&self, original_path: &PathBuf) -> PathBuf {
         // generate a 6-character alphanumeric string
@@ -107,7 +108,7 @@ impl Engine {
             .to_string();

         // path on disk
-        let mut path = self.save_path.clone();
+        let mut path = self.cfg.save_path.clone();
         path.push(&id);
         path.set_extension(original_extension);
@@ -119,7 +120,8 @@ impl Engine {
         }
     }

-    // process an upload. this is called by the new route
+    /// Process an upload.
+    /// This is called by the /new route.
     pub async fn process_upload(
         &self,
         path: PathBuf,
@@ -193,25 +195,20 @@ impl Engine {
         self.upl_count.fetch_add(1, Ordering::Relaxed);
     }

-    // read an upload from cache, if it exists
-    // previously, this would lock the cache as writable to renew the upload's cache lifespan
-    // locking the cache as readable allows multiple concurrent readers, which allows me to handle multiple views concurrently
+    /// Read an upload from cache, if it exists.
+    ///
+    /// Previously, this would lock the cache as
+    /// writable to renew the upload's cache lifespan.
+    /// Locking the cache as readable allows multiple concurrent
+    /// readers though, which allows me to handle multiple views concurrently.
     async fn read_cached_upload(&self, name: &String) -> Option<Bytes> {
         let cache = self.cache.read().await;

-        if !cache.contains_key(name) {
-            return None;
-        }
-
-        // fetch upload data from cache
-        let data = cache
-            .get(name)
-            .expect("failed to read get upload data from cache")
-            .to_owned();
-
-        Some(data)
+        cache.get(name).map(ToOwned::to_owned)
     }

+    /// Reads an upload, from cache or on disk.
     pub async fn get_upload(&self, original_path: &Path) -> Result<ViewSuccess, ViewError> {
         // extract upload file name
         let name = original_path
@@ -221,7 +218,7 @@ impl Engine {
             .to_string();

         // path on disk
-        let mut path = self.save_path.clone();
+        let mut path = self.cfg.save_path.clone();
         path.push(&name);

         // check if the upload exists, if not then 404
@@ -233,18 +230,24 @@ impl Engine {
         let cached_data = self.read_cached_upload(&name).await;

         if let Some(data) = cached_data {
-            info!("got upload from cache!!");
+            info!("got upload from cache!");

             Ok(ViewSuccess::FromCache(data))
         } else {
             // we already know the upload exists by now so this is okay
             let mut file = File::open(&path).await.unwrap();

             // read upload length from disk
-            let length = file
-                .metadata()
-                .await
-                .expect("failed to read upload file metadata")
-                .len() as usize;
+            let metadata = file.metadata().await;
+
+            if metadata.is_err() {
+                error!("failed to get upload file metadata!");
+                return Err(ViewError::InternalServerError);
+            }
+
+            let metadata = metadata.unwrap();
+
+            let length = metadata.len() as usize;

             debug!("read upload from disk, size = {}", length);

src/index.rs

@@ -2,20 +2,20 @@ use std::sync::{atomic::Ordering, Arc};

 use axum::extract::State;

-// show index status page with amount of uploaded files
+/// Show index status page with amount of uploaded files
 pub async fn index(State(engine): State<Arc<crate::engine::Engine>>) -> String {
     let count = engine.upl_count.load(Ordering::Relaxed);

-    format!("minish's image host, currently hosting {} files", count)
+    let motd = engine.cfg.motd.clone();
+
+    motd
+        .replace("%version%", env!("CARGO_PKG_VERSION"))
+        .replace("%uplcount%", &count.to_string())
 }

-// robots.txt that tells web crawlers not to list uploads
-const ROBOTS_TXT: &str = concat!(
-    "User-Agent: *\n",
-    "Disallow: /p/*\n",
-    "Allow: /\n"
-);
+/// robots.txt that tells web crawlers not to list uploads
+const ROBOTS_TXT: &str = concat!("User-Agent: *\n", "Disallow: /p/*\n", "Allow: /\n");

 pub async fn robots_txt() -> &'static str {
     ROBOTS_TXT
-}
\ No newline at end of file
+}
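
To illustrate the substitution (values are hypothetical): with the default motd from `config.rs`, an instance of breeze 0.1.5 hosting 42 files would respond with `breeze file server (v0.1.5) - currently hosting 42 files`; `%version%` is baked in at compile time via `CARGO_PKG_VERSION`.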

src/main.rs

@@ -1,63 +1,56 @@
-use std::{env, path::PathBuf, sync::Arc, time::Duration};
+use std::{path::PathBuf, sync::Arc};

 extern crate axum;

-#[macro_use]
-extern crate log;
-
+use clap::Parser;
+use engine::Engine;
 use axum::{
     routing::{get, post},
     Router,
 };
-use tokio::signal;
+use tokio::{fs, signal};
+use tracing::{info, warn};

+mod config;
 mod engine;
 mod index;
 mod new;
 mod view;

+#[derive(Parser, Debug)]
+struct Args {
+    /// The path to configuration file
+    #[arg(short, long, value_name = "file")]
+    config: PathBuf,
+}
+
 #[tokio::main]
 async fn main() {
-    // initialise logger
-    pretty_env_logger::init();
+    // read & parse args
+    let args = Args::parse();

-    // read env vars
-    let base_url = env::var("BRZ_BASE_URL").expect("missing BRZ_BASE_URL! base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)");
-    let save_path = env::var("BRZ_SAVE_PATH").expect("missing BRZ_SAVE_PATH! this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\\brzuploads)");
-    let upload_key = env::var("BRZ_UPLOAD_KEY").unwrap_or_default();
-    let cache_max_length = env::var("BRZ_CACHE_UPL_MAX_LENGTH").expect("missing BRZ_CACHE_UPL_MAX_LENGTH! this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)");
-    let cache_upl_lifetime = env::var("BRZ_CACHE_UPL_LIFETIME").expect("missing BRZ_CACHE_UPL_LIFETIME! this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)");
-    let cache_scan_freq = env::var("BRZ_CACHE_SCAN_FREQ").expect("missing BRZ_CACHE_SCAN_FREQ! this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)");
-    let cache_mem_capacity = env::var("BRZ_CACHE_MEM_CAPACITY").expect("missing BRZ_CACHE_MEM_CAPACITY! this is the amount of memory the cache will hold before dropping entries");
+    // read & parse config
+    let config_str = fs::read_to_string(args.config)
+        .await
+        .expect("failed to read config file! make sure it exists and you have read permissions");

-    // parse env vars
-    let save_path = PathBuf::from(save_path);
-    let cache_max_length = cache_max_length.parse::<usize>().expect("failed parsing BRZ_CACHE_UPL_MAX_LENGTH! it should be a positive number without any separators");
-    let cache_upl_lifetime = Duration::from_secs(cache_upl_lifetime.parse::<u64>().expect("failed parsing BRZ_CACHE_UPL_LIFETIME! it should be a positive number without any separators"));
-    let cache_scan_freq = Duration::from_secs(cache_scan_freq.parse::<u64>().expect("failed parsing BRZ_CACHE_SCAN_FREQ! it should be a positive number without any separators"));
-    let cache_mem_capacity = cache_mem_capacity.parse::<usize>().expect("failed parsing BRZ_CACHE_MEM_CAPACITY! it should be a positive number without any separators");
+    let cfg: config::Config = toml::from_str(&config_str).expect("invalid config! check that you have included all required options and structured it properly (no config options expecting a number getting a string, etc.)");

-    if !save_path.exists() || !save_path.is_dir() {
+    tracing_subscriber::fmt()
+        .with_max_level(cfg.logger.level)
+        .init();
+
+    if !cfg.engine.save_path.exists() || !cfg.engine.save_path.is_dir() {
         panic!("the save path does not exist or is not a directory! this is invalid");
     }

-    if upload_key.is_empty() {
-        // i would prefer this to be a warning but the default log level hides those
-        error!("upload key (BRZ_UPLOAD_KEY) is empty! no key will be required for uploading new files");
+    if cfg.engine.upload_key.is_empty() {
+        warn!("engine upload_key is empty! no key will be required for uploading new files");
     }

     // create engine
-    let engine = Engine::new(
-        base_url,
-        save_path,
-        upload_key,
-        cache_max_length,
-        cache_upl_lifetime,
-        cache_scan_freq,
-        cache_mem_capacity,
-    );
+    let engine = Engine::new(cfg.engine);

     // build main router
     let app = Router::new()
@@ -68,11 +61,16 @@ async fn main() {
         .with_state(Arc::new(engine));

     // start web server
-    axum::Server::bind(&"0.0.0.0:8000".parse().unwrap())
-        .serve(app.into_make_service())
-        .with_graceful_shutdown(shutdown_signal())
-        .await
-        .unwrap();
+    axum::Server::bind(
+        &cfg.http
+            .listen_on
+            .parse()
+            .expect("failed to parse listen_on address"),
+    )
+    .serve(app.into_make_service())
+    .with_graceful_shutdown(shutdown_signal())
+    .await
+    .expect("failed to start server");
 }

 async fn shutdown_signal() {
@@ -99,4 +97,4 @@ async fn shutdown_signal() {
     }

     info!("shutting down!");
-}
\ No newline at end of file
+}

src/new.rs

@@ -6,17 +6,21 @@ use axum::{
 };
 use hyper::{header, HeaderMap, StatusCode};

+/// The request handler for the /new path.
+/// This handles all new uploads.
 #[axum::debug_handler]
 pub async fn new(
     State(engine): State<Arc<crate::engine::Engine>>,
-    headers: HeaderMap,
     Query(params): Query<HashMap<String, String>>,
+    headers: HeaderMap,
     stream: BodyStream,
 ) -> Result<String, StatusCode> {
     let key = params.get("key");

+    const EMPTY_STRING: &String = &String::new();
+
     // check upload key, if i need to
-    if !engine.upload_key.is_empty() && key.unwrap_or(&String::new()) != &engine.upload_key {
+    if !engine.cfg.upload_key.is_empty() && key.unwrap_or(EMPTY_STRING) != &engine.cfg.upload_key {
         return Err(StatusCode::FORBIDDEN);
     }
@@ -36,7 +40,7 @@ pub async fn new(
         .unwrap_or_default()
         .to_string();

-    let url = format!("{}/p/{}", engine.base_url, name);
+    let url = format!("{}/p/{}", engine.cfg.base_url, name);

     // read and parse content-length, and if it fails just assume it's really high so it doesn't cache
     let content_length = headers

src/view.rs

@@ -13,22 +13,43 @@ use bytes::Bytes;
 use hyper::{http::HeaderValue, StatusCode};
 use tokio::{fs::File, runtime::Handle};
 use tokio_util::io::ReaderStream;
+use tracing::{error, debug, info};

+/// Responses for a successful view operation
 pub enum ViewSuccess {
+    /// A file read from disk, suitable for larger files.
+    ///
+    /// The file provided will be streamed from disk and
+    /// back to the viewer.
+    ///
+    /// This is only ever used if a file exceeds the
+    /// cache's maximum file size.
     FromDisk(File),

+    /// A file read from in-memory cache, best for smaller files.
+    ///
+    /// The file is taken from the cache in its entirety
+    /// and sent back to the viewer.
+    ///
+    /// If a file can be fit into cache, this will be
+    /// used even if it's read from disk.
     FromCache(Bytes),
 }

+/// Responses for a failed view operation
 pub enum ViewError {
-    NotFound, // 404
-    InternalServerError, // 500
+    /// Will send status code 404 with a plaintext "not found" message.
+    NotFound,
+
+    /// Will send status code 500 with a plaintext "internal server error" message.
+    InternalServerError,
 }

 impl IntoResponse for ViewSuccess {
     fn into_response(self) -> Response {
         match self {
             ViewSuccess::FromDisk(file) => {
-                // get handle to current runtime
+                // get handle to current tokio runtime
                 // i use this to block on futures here (not async)
                 let handle = Handle::current();
                 let _ = handle.enter();
@@ -88,24 +109,21 @@ impl IntoResponse for ViewSuccess {
 impl IntoResponse for ViewError {
     fn into_response(self) -> Response {
         match self {
-            ViewError::NotFound => {
-                // convert string into response, change status code
-                let mut res = "not found!".into_response();
-                *res.status_mut() = StatusCode::NOT_FOUND;
-                res
-            }
-            ViewError::InternalServerError => {
-                // convert string into response, change status code
-                let mut res = "internal server error!".into_response();
-                *res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
-                res
-            }
+            ViewError::NotFound => (
+                StatusCode::NOT_FOUND,
+                "not found!"
+            ).into_response(),
+            ViewError::InternalServerError => (
+                StatusCode::INTERNAL_SERVER_ERROR,
+                "internal server error!"
+            ).into_response(),
         }
     }
 }

+/// The request handler for /p/* path.
+/// All file views are handled here.
 #[axum::debug_handler]
 pub async fn view(
     State(engine): State<Arc<crate::engine::Engine>>,
@@ -116,7 +134,7 @@ pub async fn view(
         .components()
         .any(|x| !matches!(x, Component::Normal(_)))
     {
-        warn!("a request attempted path traversal");
+        info!("a request attempted path traversal");
         return Err(ViewError::NotFound);
     }