Compare commits

...

10 Commits

minish 3513337ac7 (2023-12-08 14:20:35 -05:00)
    update dockerfile for new config
    also fix glibc error

minish 661f2f14dd (2023-12-08 14:17:19 -05:00)
    doc update

minish 0d176bca40 (2023-12-08 13:08:38 -05:00)
    --config is now a required switch

minish a0ffd1ddd1 (2023-12-07 23:38:21 -05:00)
    bump rust version

minish f5c67c64d7 (2023-12-07 13:33:19 -05:00)
    move default_motd func to a better place

minish a315baa258 (2023-12-07 13:31:27 -05:00)
    config restructure + motd option

minish 2aa97e05b4 (2023-12-07 13:21:13 -05:00)
    fix typo in ViewError::NotFound doc comment

minish 5f8adf023f (2023-12-02 15:58:00 -05:00)
    lower path traversal warning to an info
    the new default log level is warning
    so i don't want it to be possible to spam server logs on default config

minish d9f560677a (2023-11-10 19:36:43 -05:00)
    lol oops

minish 3fa4caad92 (2023-11-09 21:22:02 -05:00)
    config rework ep1
    + misc refactoring and tweaks
13 changed files with 1037 additions and 351 deletions

Cargo.lock (generated, 895 changed lines)
File diff suppressed because it is too large.

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "breeze"
-version = "0.1.4"
+version = "0.1.5"
 edition = "2021"
 
 [dependencies]
@@ -15,6 +15,11 @@ rand = "0.8.5"
 async-recursion = "1.0.0"
 walkdir = "2"
 futures = "0.3"
-log = "0.4"
-pretty_env_logger = "0.5.0"
+tracing = "0.1"
+tracing-subscriber = "0.3"
 archived = { path = "./archived" }
+xxhash-rust = { version = "0.8.7", features = ["xxh3"] }
+serde = { version = "1.0.189", features = ["derive"] }
+toml = "0.8.2"
+clap = { version = "4.4.6", features = ["derive"] }
+serde_with = "3.4.0"
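The dependency changes above replace `log` and `pretty_env_logger` with `tracing` and `tracing-subscriber`. As a minimal sketch of the new initialization (mirroring what `src/main.rs` sets up further down; the function name here is just for illustration):

```rust
use tracing_subscriber::filter::LevelFilter;

// Sketch: this replaces the old pretty_env_logger::init() call.
// LevelFilter::WARN matches the new default log level from the config.
fn init_logging() {
    tracing_subscriber::fmt()
        .with_max_level(LevelFilter::WARN)
        .init();
}
```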

Dockerfile

@@ -1,12 +1,12 @@
 # builder
-FROM rust:1.73 as builder
+FROM rust:1.74 as builder
 
 WORKDIR /usr/src/breeze
 COPY . .
 
 RUN cargo install --path .
 
 # runner
-FROM debian:bullseye-slim
+FROM debian:bookworm-slim
 
 RUN apt-get update && rm -rf /var/lib/apt/lists/*
@@ -16,4 +16,4 @@ RUN useradd -m runner
 USER runner
 
 EXPOSE 8000
-CMD [ "breeze" ]
+CMD [ "breeze", "--config", "/etc/breeze.toml" ]

README.md

@@ -1,6 +1,8 @@
 # breeze
 breeze is a simple, performant file upload server.
+
+The primary instance is https://picture.wtf.
 
 ## Features
 Compared to the old Express.js backend, breeze has
 - Streamed uploading
@@ -15,10 +17,10 @@ I wrote breeze with the intention of running it in a container, but it runs just
 Either way, you need to start off by cloning the Git repository.
 ```bash
-git clone https://git.min.rip/minish/breeze.git
+git clone https://git.min.rip/min/breeze.git
 ```
 
-To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below.
+To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below. You can start it using `docker compose up -d`.
 ```
 version: '3.6'
@@ -29,20 +31,15 @@ services:
     volumes:
       - /srv/uploads:/data
+      - ./breeze.toml:/etc/breeze.toml
 
     ports:
       - 8000:8000
-
-    environment:
-      - BRZ_BASE_URL=http://127.0.0.1:8000
-      - BRZ_SAVE_PATH=/data
-      - BRZ_UPLOAD_KEY=hiiiiiiii
-      - BRZ_CACHE_UPL_MAX_LENGTH=134217728 # allow files up to ~134 MiB to be cached
-      - BRZ_CACHE_UPL_LIFETIME=1800 # let uploads stay in cache for 30 minutes
-      - BRZ_CACHE_SCAN_FREQ=60 # scan the cache for expired files if more than 60 seconds have passed since the last scan
-      - BRZ_CACHE_MEM_CAPACITY=4294967296 # allow 4 GiB of data to be in the cache at once
 ```
 
-For this configuration, it is expected that there is a clone of the Git repository in the `./breeze` folder. You can start it using `docker compose up -d`.
+For this configuration, it is expected that:
+* there is a clone of the Git repository in the `./breeze` folder.
+* there is a `breeze.toml` config file in current directory
+* there is a directory at `/srv/uploads` for storing uploads
 
 It can also be installed directly if you have the Rust toolchain installed:
 ```bash
@@ -51,15 +48,59 @@ cargo install --path .
 ## Usage
 ### Hosting
-Configuration is read through environment variables, because I wanted to run this using Docker Compose.
-```
-BRZ_BASE_URL - base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)
-BRZ_SAVE_PATH - this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\brzuploads)
-BRZ_UPLOAD_KEY (optional) - if not empty, the key you specify will be required to upload new files.
-BRZ_CACHE_UPL_MAX_LENGTH - this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)
-BRZ_CACHE_UPL_LIFETIME - this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)
-BRZ_CACHE_SCAN_FREQ - this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)
-BRZ_CACHE_MEM_CAPACITY - this is the amount of memory the cache will hold before dropping entries
+Configuration is read through a toml file.
+
+By default it'll try to read `./breeze.toml`, but you can specify a different path using the `-c`/`--config` command line switch.
+
+Here is an example config file:
+```toml
+[engine]
+# The base URL that the HTTP server will be accessible on.
+# This is used for formatting upload URLs.
+# Setting it to "https://picture.wtf" would result in
+# upload urls of "https://picture.wtf/p/abcdef.png", etc.
+base_url = "http://127.0.0.1:8000"
+
+# The location that uploads will be saved to.
+# It should be a path to a directory on disk that you can write to.
+save_path = "/data"
+
+# OPTIONAL - If set, the static key specified will be required to upload new files.
+# If it is not set, no key will be required.
+upload_key = "hiiiiiiii"
+
+# OPTIONAL - specifies what to show when the site is visited on http
+# It is sent with text/plain content type.
+# There are two variables you can use:
+# %uplcount% - total number of uploads present on the server
+# %version% - current breeze version (e.g. 0.1.5)
+motd = "my image host, currently hosting %uplcount% files"
+
+[engine.cache]
+# The file size (in bytes) that a file must be under
+# to get cached.
+max_length = 134_217_728
+
+# How long a cached upload will remain cached. (in seconds)
+upload_lifetime = 1800
+
+# How often the cache will be checked for expired uploads.
+# It is not a continuous scan, and only is triggered upon a cache operation.
+scan_freq = 60
+
+# How much memory (in bytes) the cache is allowed to consume.
+mem_capacity = 4_294_967_295
+
+[http]
+# The address that the HTTP server will listen on. (ip:port)
+# Use 0.0.0.0 as the IP to listen publicly, 127.0.0.1 only lets your
+# computer access it
+listen_on = "127.0.0.1:8000"
+
+[logger]
+# OPTIONAL - the current log level.
+# Default level is warn.
+level = "warn"
 ```
 ### Uploading
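The README now describes a single TOML file as the source of all configuration. A minimal sketch of that load path, generic over the target type since it mirrors (but is not copied from) the `src/main.rs` flow shown further down:

```rust
use std::path::PathBuf;

use serde::de::DeserializeOwned;

// Sketch: read the file given by -c/--config and deserialize it,
// the same read_to_string + toml::from_str flow src/main.rs uses.
async fn load_config<T: DeserializeOwned>(path: PathBuf) -> T {
    let config_str = tokio::fs::read_to_string(path)
        .await
        .expect("failed to read config file");

    toml::from_str(&config_str).expect("invalid config")
}
```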

archived/Cargo.lock (generated, 7 changed lines)

@@ -8,6 +8,7 @@ version = "0.2.0"
 dependencies = [
  "bytes",
  "once_cell",
+ "rustc-hash",
 ]
 
 [[package]]
@@ -21,3 +22,9 @@ name = "once_cell"
 version = "1.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b1c601810575c99596d4afc46f78a678c80105117c379eb3650cf99b8a21ce5b"
+
+[[package]]
+name = "rustc-hash"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2"

archived/Cargo.toml

@@ -6,4 +6,4 @@ license = "MIT"
 [dependencies]
 bytes = "1.3.0"
 once_cell = "1.3.1"

archived (Archive implementation)

@@ -29,7 +29,11 @@ impl Archive {
         }
     } */
 
-    pub fn with_full_scan(full_scan_frequency: Duration, entry_lifetime: Duration, capacity: usize) -> Self {
+    pub fn with_full_scan(
+        full_scan_frequency: Duration,
+        entry_lifetime: Duration,
+        capacity: usize,
+    ) -> Self {
         Self {
             cache_table: HashMap::with_capacity(256),
             full_scan_frequency: Some(full_scan_frequency),
@@ -67,11 +71,7 @@ impl Archive {
             .map(|cache_entry| &cache_entry.value)
     }
 
-    pub fn get_or_insert<F>(
-        &mut self,
-        key: String,
-        factory: F,
-    ) -> &Bytes
+    pub fn get_or_insert<F>(&mut self, key: String, factory: F) -> &Bytes
     where
         F: Fn() -> Bytes,
     {
@@ -87,15 +87,15 @@ impl Archive {
                 &occupied.into_mut().value
             }
-            Entry::Vacant(vacant) => &vacant.insert(CacheEntry::new(factory(), self.entry_lifetime)).value,
+            Entry::Vacant(vacant) => {
+                &vacant
+                    .insert(CacheEntry::new(factory(), self.entry_lifetime))
+                    .value
+            }
         }
     }
 
-    pub fn insert(
-        &mut self,
-        key: String,
-        value: Bytes,
-    ) -> Option<Bytes> {
+    pub fn insert(&mut self, key: String, value: Bytes) -> Option<Bytes> {
         let now = SystemTime::now();
 
         self.try_full_scan_expired_items(now);
@@ -144,7 +144,7 @@ impl Archive {
                 Some(())
             }
-            None => None
+            None => None,
         }
     }
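For reference, the reshaped `archived` API above ends up being called like this (a standalone sketch using only the signatures visible in the hunks; the values match what the engine passes in):

```rust
use std::time::Duration;

use archived::Archive;
use bytes::Bytes;

fn demo() {
    // scan for expired entries at most once every 60 seconds, keep
    // entries for 30 minutes, and cap memory use at ~4 GiB
    let mut cache = Archive::with_full_scan(
        Duration::from_secs(60),
        Duration::from_secs(1800),
        4_294_967_295,
    );

    // insert returns the previous value for the key, if any
    let previous = cache.insert("abcdef.png".to_string(), Bytes::from_static(b"..."));
    assert!(previous.is_none());

    // get_or_insert runs the factory only on a cache miss
    let _data = cache.get_or_insert("ghijkl.png".to_string(), || Bytes::new());
}
```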

src/config.rs (new file, 81 lines)

@@ -0,0 +1,81 @@
+use std::{path::PathBuf, time::Duration};
+
+use serde::Deserialize;
+use serde_with::{serde_as, DisplayFromStr, DurationSeconds};
+use tracing_subscriber::filter::LevelFilter;
+
+#[derive(Deserialize)]
+pub struct Config {
+    pub engine: EngineConfig,
+    pub http: HttpConfig,
+    pub logger: LoggerConfig,
+}
+
+fn default_motd() -> String {
+    "breeze file server (v%version%) - currently hosting %uplcount% files".to_string()
+}
+
+#[derive(Deserialize)]
+pub struct EngineConfig {
+    /// The url that the instance of breeze is meant to be accessed from.
+    ///
+    /// ex: https://picture.wtf would generate links like https://picture.wtf/p/abcdef.png
+    pub base_url: String,
+
+    /// Location on disk the uploads are to be saved to
+    pub save_path: PathBuf,
+
+    /// Authentication key for new uploads, will be required if this is specified. (optional)
+    #[serde(default)]
+    pub upload_key: String,
+
+    /// Configuration for cache system
+    pub cache: CacheConfig,
+
+    /// Motd displayed when the server's index page is visited.
+    ///
+    /// This isn't explicitly engine-related but the engine is what gets passed to routes,
+    /// so it is here for now.
+    #[serde(default = "default_motd")]
+    pub motd: String,
+}
+
+#[serde_as]
+#[derive(Deserialize)]
+pub struct CacheConfig {
+    /// The maximum length in bytes that a file can be
+    /// before it skips cache (in seconds)
+    pub max_length: usize,
+
+    /// The amount of time a file can last inside the cache (in seconds)
+    #[serde_as(as = "DurationSeconds")]
+    pub upload_lifetime: Duration,
+
+    /// How often the cache is to be scanned for
+    /// expired entries (in seconds)
+    #[serde_as(as = "DurationSeconds")]
+    pub scan_freq: Duration,
+
+    /// How much memory the cache is allowed to use (in bytes)
+    pub mem_capacity: usize,
+}
+
+#[derive(Deserialize)]
+pub struct HttpConfig {
+    pub listen_on: String,
+}
+
+fn default_level_filter() -> LevelFilter {
+    LevelFilter::WARN
+}
+
+#[serde_as]
+#[derive(Deserialize)]
+pub struct LoggerConfig {
+    /// Minimum level a log must be for it to be shown.
+    /// This defaults to "warn" if not specified.
+    #[serde_as(as = "DisplayFromStr")]
+    #[serde(default = "default_level_filter")]
+    // yes... kind of a hack but serde doesn't have anything better
+    pub level: LevelFilter,
+}
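The `#[serde_as(as = "DurationSeconds")]` attributes above are what turn plain integers in the TOML (for example `upload_lifetime = 1800`) into `std::time::Duration` values. A self-contained sketch of the same mechanism:

```rust
use std::time::Duration;

use serde::Deserialize;
use serde_with::{serde_as, DurationSeconds};

#[serde_as]
#[derive(Deserialize)]
struct Cache {
    // deserializes an integer number of seconds into a Duration
    #[serde_as(as = "DurationSeconds")]
    upload_lifetime: Duration,
}

fn main() {
    let cache: Cache = toml::from_str("upload_lifetime = 1800").unwrap();
    assert_eq!(cache.upload_lifetime, Duration::from_secs(1800));
}
```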

src/engine.rs

@@ -2,7 +2,6 @@ use std::{
     ffi::OsStr,
     path::{Path, PathBuf},
     sync::atomic::{AtomicUsize, Ordering},
-    time::Duration,
 };
 
 use archived::Archive;
@@ -18,70 +17,69 @@ use tokio::{
     },
 };
 use tokio_stream::StreamExt;
+use tracing::{debug, error, info};
 use walkdir::WalkDir;
 
-use crate::view::{ViewError, ViewSuccess};
+use crate::{
+    config,
+    view::{ViewError, ViewSuccess},
+};
 
+/// breeze engine! this is the core of everything
 pub struct Engine {
-    // state
-    cache: RwLock<Archive>, // in-memory cache
-    pub upl_count: AtomicUsize, // cached count of uploaded files
+    /// The in-memory cache that cached uploads are stored in.
+    cache: RwLock<Archive>,
 
-    // config
-    pub base_url: String, // base url for formatting upload urls
-    save_path: PathBuf, // where uploads are saved to disk
-    pub upload_key: String, // authorisation key for uploading new files
+    /// Cached count of uploaded files.
+    pub upl_count: AtomicUsize,
 
-    cache_max_length: usize, // if an upload is bigger than this size, it won't be cached
+    /// Engine configuration
+    pub cfg: config::EngineConfig,
 }
 
 impl Engine {
-    // create a new engine
-    pub fn new(
-        base_url: String,
-        save_path: PathBuf,
-        upload_key: String,
-        cache_max_length: usize,
-        cache_lifetime: Duration,
-        cache_full_scan_freq: Duration, // how often the cache will be scanned for expired items
-        cache_mem_capacity: usize,
-    ) -> Self {
+    /// Creates a new instance of the breeze engine.
+    pub fn new(cfg: config::EngineConfig) -> Self {
         Self {
             cache: RwLock::new(Archive::with_full_scan(
-                cache_full_scan_freq,
-                cache_lifetime,
-                cache_mem_capacity,
+                cfg.cache.scan_freq,
+                cfg.cache.upload_lifetime,
+                cfg.cache.mem_capacity,
             )),
-            upl_count: AtomicUsize::new(WalkDir::new(&save_path).min_depth(1).into_iter().count()), // count the amount of files in the save path and initialise our cached count with it
-
-            base_url,
-            save_path,
-            upload_key,
-            cache_max_length,
+            upl_count: AtomicUsize::new(
+                WalkDir::new(&cfg.save_path)
+                    .min_depth(1)
+                    .into_iter()
+                    .count(),
+            ), // count the amount of files in the save path and initialise our cached count with it
+            cfg,
         }
     }
 
+    /// Returns if an upload would be able to be cached
+    #[inline(always)]
    fn will_use_cache(&self, length: usize) -> bool {
-        length <= self.cache_max_length
+        length <= self.cfg.cache.max_length
     }
 
-    // checks in cache or disk for an upload using a pathbuf
+    /// Check if an upload exists in cache or on disk
     pub async fn upload_exists(&self, path: &Path) -> bool {
         let cache = self.cache.read().await;
 
-        // check if upload is in cache
+        // extract file name, since that's what cache uses
         let name = path
             .file_name()
             .and_then(OsStr::to_str)
             .unwrap_or_default()
             .to_string();
 
+        // check in cache
         if cache.contains_key(&name) {
            return true;
        }
 
-        // check if upload is on disk
+        // check on disk
        if path.exists() {
            return true;
        }
@@ -89,7 +87,10 @@ impl Engine {
         return false;
     }
 
-    // generate a new save path for an upload
+    /// Generate a new save path for an upload.
+    ///
+    /// This will call itself recursively if it picks
+    /// a name that's already used. (it is rare)
     #[async_recursion::async_recursion]
     pub async fn gen_path(&self, original_path: &PathBuf) -> PathBuf {
         // generate a 6-character alphanumeric string
@@ -107,7 +108,7 @@ impl Engine {
             .to_string();
 
         // path on disk
-        let mut path = self.save_path.clone();
+        let mut path = self.cfg.save_path.clone();
         path.push(&id);
         path.set_extension(original_extension);
@@ -119,7 +120,8 @@
         }
     }
 
-    // process an upload. this is called by the new route
+    /// Process an upload.
+    /// This is called by the /new route.
     pub async fn process_upload(
         &self,
         path: PathBuf,
@@ -193,25 +195,20 @@ impl Engine {
         self.upl_count.fetch_add(1, Ordering::Relaxed);
     }
 
-    // read an upload from cache, if it exists
-    // previously, this would lock the cache as writable to renew the upload's cache lifespan
-    // locking the cache as readable allows multiple concurrent readers, which allows me to handle multiple views concurrently
+    /// Read an upload from cache, if it exists.
+    ///
+    /// Previously, this would lock the cache as
+    /// writable to renew the upload's cache lifespan.
+    /// Locking the cache as readable allows multiple concurrent
+    /// readers though, which allows me to handle multiple views concurrently.
     async fn read_cached_upload(&self, name: &String) -> Option<Bytes> {
         let cache = self.cache.read().await;
 
-        if !cache.contains_key(name) {
-            return None;
-        }
-
         // fetch upload data from cache
-        let data = cache
-            .get(name)
-            .expect("failed to read get upload data from cache")
-            .to_owned();
-
-        Some(data)
+        cache.get(name).map(ToOwned::to_owned)
     }
 
+    /// Reads an upload, from cache or on disk.
     pub async fn get_upload(&self, original_path: &Path) -> Result<ViewSuccess, ViewError> {
         // extract upload file name
         let name = original_path
@@ -221,7 +218,7 @@ impl Engine {
             .to_string();
 
         // path on disk
-        let mut path = self.save_path.clone();
+        let mut path = self.cfg.save_path.clone();
         path.push(&name);
 
         // check if the upload exists, if not then 404
@@ -233,18 +230,24 @@ impl Engine {
         let cached_data = self.read_cached_upload(&name).await;
 
         if let Some(data) = cached_data {
-            info!("got upload from cache!!");
+            info!("got upload from cache!");
 
             Ok(ViewSuccess::FromCache(data))
         } else {
+            // we already know the upload exists by now so this is okay
             let mut file = File::open(&path).await.unwrap();
 
             // read upload length from disk
-            let length = file
-                .metadata()
-                .await
-                .expect("failed to read upload file metadata")
-                .len() as usize;
+            let metadata = file.metadata().await;
+
+            if metadata.is_err() {
+                error!("failed to get upload file metadata!");
+                return Err(ViewError::InternalServerError);
+            }
+
+            let metadata = metadata.unwrap();
+            let length = metadata.len() as usize;
 
             debug!("read upload from disk, size = {}", length);
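`gen_path` above builds names from a random 6-character alphanumeric id. The diff doesn't show the generator itself; a hypothetical sketch using the `rand` crate already in the dependency list:

```rust
use rand::{distributions::Alphanumeric, Rng};

// Sketch: a 6-character alphanumeric id (e.g. "aB3x9Q"), assuming
// this is roughly what gen_path samples for upload file names.
fn random_id() -> String {
    rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(6)
        .map(char::from)
        .collect()
}
```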

src/index.rs

@@ -2,20 +2,20 @@ use std::sync::{atomic::Ordering, Arc};
 
 use axum::extract::State;
 
-// show index status page with amount of uploaded files
+/// Show index status page with amount of uploaded files
 pub async fn index(State(engine): State<Arc<crate::engine::Engine>>) -> String {
     let count = engine.upl_count.load(Ordering::Relaxed);
 
-    format!("minish's image host, currently hosting {} files", count)
+    let motd = engine.cfg.motd.clone();
+
+    motd
+        .replace("%version%", env!("CARGO_PKG_VERSION"))
+        .replace("%uplcount%", &count.to_string())
 }
 
-// robots.txt that tells web crawlers not to list uploads
-const ROBOTS_TXT: &str = concat!(
-    "User-Agent: *\n",
-    "Disallow: /p/*\n",
-    "Allow: /\n"
-);
-
 pub async fn robots_txt() -> &'static str {
+    /// robots.txt that tells web crawlers not to list uploads
+    const ROBOTS_TXT: &str = concat!("User-Agent: *\n", "Disallow: /p/*\n", "Allow: /\n");
+
     ROBOTS_TXT
 }
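With the new `motd` handling above, placeholder expansion behaves like this (a quick illustration with made-up values, not code from the diff):

```rust
fn main() {
    let motd = "my image host, currently hosting %uplcount% files".to_string();

    // same replace chain as index(): %version% first, then %uplcount%
    let expanded = motd
        .replace("%version%", "0.1.5")
        .replace("%uplcount%", &42.to_string());

    assert_eq!(expanded, "my image host, currently hosting 42 files");
}
```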

src/main.rs

@@ -1,63 +1,56 @@
-use std::{env, path::PathBuf, sync::Arc, time::Duration};
+use std::{path::PathBuf, sync::Arc};
 
 extern crate axum;
 
-#[macro_use]
-extern crate log;
-
+use clap::Parser;
 use engine::Engine;
 
 use axum::{
     routing::{get, post},
     Router,
 };
-use tokio::signal;
+use tokio::{fs, signal};
+use tracing::{info, warn};
 
+mod config;
 mod engine;
 mod index;
 mod new;
 mod view;
 
+#[derive(Parser, Debug)]
+struct Args {
+    /// The path to configuration file
+    #[arg(short, long, value_name = "file")]
+    config: PathBuf,
+}
+
 #[tokio::main]
 async fn main() {
-    // initialise logger
-    pretty_env_logger::init();
+    // read & parse args
+    let args = Args::parse();
 
-    // read env vars
-    let base_url = env::var("BRZ_BASE_URL").expect("missing BRZ_BASE_URL! base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)");
-    let save_path = env::var("BRZ_SAVE_PATH").expect("missing BRZ_SAVE_PATH! this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\\brzuploads)");
-    let upload_key = env::var("BRZ_UPLOAD_KEY").unwrap_or_default();
-    let cache_max_length = env::var("BRZ_CACHE_UPL_MAX_LENGTH").expect("missing BRZ_CACHE_UPL_MAX_LENGTH! this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)");
-    let cache_upl_lifetime = env::var("BRZ_CACHE_UPL_LIFETIME").expect("missing BRZ_CACHE_UPL_LIFETIME! this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)");
-    let cache_scan_freq = env::var("BRZ_CACHE_SCAN_FREQ").expect("missing BRZ_CACHE_SCAN_FREQ! this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)");
-    let cache_mem_capacity = env::var("BRZ_CACHE_MEM_CAPACITY").expect("missing BRZ_CACHE_MEM_CAPACITY! this is the amount of memory the cache will hold before dropping entries");
-
-    // parse env vars
-    let save_path = PathBuf::from(save_path);
-    let cache_max_length = cache_max_length.parse::<usize>().expect("failed parsing BRZ_CACHE_UPL_MAX_LENGTH! it should be a positive number without any separators");
-    let cache_upl_lifetime = Duration::from_secs(cache_upl_lifetime.parse::<u64>().expect("failed parsing BRZ_CACHE_UPL_LIFETIME! it should be a positive number without any separators"));
-    let cache_scan_freq = Duration::from_secs(cache_scan_freq.parse::<u64>().expect("failed parsing BRZ_CACHE_SCAN_FREQ! it should be a positive number without any separators"));
-    let cache_mem_capacity = cache_mem_capacity.parse::<usize>().expect("failed parsing BRZ_CACHE_MEM_CAPACITY! it should be a positive number without any separators");
+    // read & parse config
+    let config_str = fs::read_to_string(args.config)
+        .await
+        .expect("failed to read config file! make sure it exists and you have read permissions");
 
-    if !save_path.exists() || !save_path.is_dir() {
+    let cfg: config::Config = toml::from_str(&config_str).expect("invalid config! check that you have included all required options and structured it properly (no config options expecting a number getting a string, etc.)");
+
+    tracing_subscriber::fmt()
+        .with_max_level(cfg.logger.level)
+        .init();
+
+    if !cfg.engine.save_path.exists() || !cfg.engine.save_path.is_dir() {
         panic!("the save path does not exist or is not a directory! this is invalid");
     }
 
-    if upload_key.is_empty() {
-        // i would prefer this to be a warning but the default log level hides those
-        error!("upload key (BRZ_UPLOAD_KEY) is empty! no key will be required for uploading new files");
+    if cfg.engine.upload_key.is_empty() {
+        warn!("engine upload_key is empty! no key will be required for uploading new files");
     }
 
     // create engine
-    let engine = Engine::new(
-        base_url,
-        save_path,
-        upload_key,
-        cache_max_length,
-        cache_upl_lifetime,
-        cache_scan_freq,
-        cache_mem_capacity,
-    );
+    let engine = Engine::new(cfg.engine);
 
     // build main router
     let app = Router::new()
@@ -68,11 +61,16 @@ async fn main() {
         .with_state(Arc::new(engine));
 
     // start web server
-    axum::Server::bind(&"0.0.0.0:8000".parse().unwrap())
-        .serve(app.into_make_service())
-        .with_graceful_shutdown(shutdown_signal())
-        .await
-        .unwrap();
+    axum::Server::bind(
+        &cfg.http
+            .listen_on
+            .parse()
+            .expect("failed to parse listen_on address"),
+    )
+    .serve(app.into_make_service())
+    .with_graceful_shutdown(shutdown_signal())
+    .await
+    .expect("failed to start server");
 }
 
 async fn shutdown_signal() {
@@ -99,4 +97,4 @@ async fn shutdown_signal() {
     }
 
     info!("shutting down!");
 }

src/new.rs

@@ -6,17 +6,21 @@ use axum::{
 };
 
 use hyper::{header, HeaderMap, StatusCode};
 
+/// The request handler for the /new path.
+/// This handles all new uploads.
 #[axum::debug_handler]
 pub async fn new(
     State(engine): State<Arc<crate::engine::Engine>>,
+    headers: HeaderMap,
     Query(params): Query<HashMap<String, String>>,
-    headers: HeaderMap,
     stream: BodyStream,
 ) -> Result<String, StatusCode> {
     let key = params.get("key");
 
+    const EMPTY_STRING: &String = &String::new();
+
     // check upload key, if i need to
-    if !engine.upload_key.is_empty() && key.unwrap_or(&String::new()) != &engine.upload_key {
+    if !engine.cfg.upload_key.is_empty() && key.unwrap_or(EMPTY_STRING) != &engine.cfg.upload_key {
         return Err(StatusCode::FORBIDDEN);
     }
@@ -36,7 +40,7 @@ pub async fn new(
         .unwrap_or_default()
         .to_string();
 
-    let url = format!("{}/p/{}", engine.base_url, name);
+    let url = format!("{}/p/{}", engine.cfg.base_url, name);
 
     // read and parse content-length, and if it fails just assume it's really high so it doesn't cache
     let content_length = headers
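The upload-key gate above boils down to a small predicate; a standalone sketch of the same logic (function and parameter names are hypothetical):

```rust
// Sketch: uploads are allowed when no key is configured, or when the
// ?key= query parameter matches the configured key exactly.
fn upload_allowed(configured_key: &str, provided_key: Option<&str>) -> bool {
    configured_key.is_empty() || provided_key == Some(configured_key)
}

fn main() {
    assert!(upload_allowed("", None));
    assert!(upload_allowed("hiiiiiiii", Some("hiiiiiiii")));
    assert!(!upload_allowed("hiiiiiiii", None));
}
```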

src/view.rs

@@ -13,22 +13,43 @@ use bytes::Bytes;
 use hyper::{http::HeaderValue, StatusCode};
 use tokio::{fs::File, runtime::Handle};
 use tokio_util::io::ReaderStream;
+use tracing::{error, debug, info};
 
+/// Responses for a successful view operation
 pub enum ViewSuccess {
+    /// A file read from disk, suitable for larger files.
+    ///
+    /// The file provided will be streamed from disk and
+    /// back to the viewer.
+    ///
+    /// This is only ever used if a file exceeds the
+    /// cache's maximum file size.
     FromDisk(File),
+
+    /// A file read from in-memory cache, best for smaller files.
+    ///
+    /// The file is taken from the cache in its entirety
+    /// and sent back to the viewer.
+    ///
+    /// If a file can be fit into cache, this will be
+    /// used even if it's read from disk.
     FromCache(Bytes),
 }
 
+/// Responses for a failed view operation
 pub enum ViewError {
-    NotFound, // 404
-    InternalServerError, // 500
+    /// Will send status code 404 with a plaintext "not found" message.
+    NotFound,
+
+    /// Will send status code 500 with a plaintext "internal server error" message.
+    InternalServerError,
 }
 
 impl IntoResponse for ViewSuccess {
     fn into_response(self) -> Response {
         match self {
             ViewSuccess::FromDisk(file) => {
-                // get handle to current runtime
+                // get handle to current tokio runtime
                 // i use this to block on futures here (not async)
                 let handle = Handle::current();
                 let _ = handle.enter();
@@ -88,24 +109,21 @@ impl IntoResponse for ViewSuccess {
 impl IntoResponse for ViewError {
     fn into_response(self) -> Response {
         match self {
-            ViewError::NotFound => {
-                // convert string into response, change status code
-                let mut res = "not found!".into_response();
-                *res.status_mut() = StatusCode::NOT_FOUND;
-                res
-            }
-            ViewError::InternalServerError => {
-                // convert string into response, change status code
-                let mut res = "internal server error!".into_response();
-                *res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
-                res
-            }
+            ViewError::NotFound => (
+                StatusCode::NOT_FOUND,
+                "not found!"
+            ).into_response(),
+            ViewError::InternalServerError => (
+                StatusCode::INTERNAL_SERVER_ERROR,
+                "internal server error!"
+            ).into_response(),
         }
     }
 }
 
+/// The request handler for /p/* path.
+/// All file views are handled here.
 #[axum::debug_handler]
 pub async fn view(
     State(engine): State<Arc<crate::engine::Engine>>,
@@ -116,7 +134,7 @@ pub async fn view(
         .components()
         .any(|x| !matches!(x, Component::Normal(_)))
     {
-        warn!("a request attempted path traversal");
+        info!("a request attempted path traversal");
         return Err(ViewError::NotFound);
     }
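The traversal check above rejects any request path with a component that is not `Component::Normal`. A small standalone sketch of why that catches `..` and absolute paths:

```rust
use std::path::{Component, Path};

// Sketch of the same filter view() applies to request paths:
// ".." parses as Component::ParentDir and a leading "/" as
// Component::RootDir, so only plain file names get through.
fn is_traversal_attempt(path: &Path) -> bool {
    path.components().any(|c| !matches!(c, Component::Normal(_)))
}

fn main() {
    assert!(is_traversal_attempt(Path::new("../etc/passwd")));
    assert!(is_traversal_attempt(Path::new("/etc/passwd")));
    assert!(!is_traversal_attempt(Path::new("abcdef.png")));
}
```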