Compare commits


No commits in common. "3513337ac77ff149e5d825b00640b4f905961e5f" and "6deddc3014b351c69b0bc026cf05799f69d04e67" have entirely different histories.

13 changed files with 351 additions and 1037 deletions
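To inspect this comparison locally, a plain two-commit diff works even when the commits share no common ancestor (a sketch, assuming both commits are reachable in the clone; the repository URL is the one given in the README below):

```bash
# clone the repository and enter it
git clone https://git.min.rip/min/breeze.git
cd breeze

# git diff accepts two arbitrary commits, even across unrelated histories
git diff --stat 3513337ac77ff149e5d825b00640b4f905961e5f 6deddc3014b351c69b0bc026cf05799f69d04e67
```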

Cargo.lock (generated): 895 lines changed. File diff suppressed because it is too large.

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "breeze"
-version = "0.1.5"
+version = "0.1.4"
 edition = "2021"

 [dependencies]
@@ -15,11 +15,6 @@ rand = "0.8.5"
 async-recursion = "1.0.0"
 walkdir = "2"
 futures = "0.3"
-tracing = "0.1"
-tracing-subscriber = "0.3"
+log = "0.4"
+pretty_env_logger = "0.5.0"
 archived = { path = "./archived" }
-xxhash-rust = { version = "0.8.7", features = ["xxh3"] }
-serde = { version = "1.0.189", features = ["derive"] }
-toml = "0.8.2"
-clap = { version = "4.4.6", features = ["derive"] }
-serde_with = "3.4.0"

Dockerfile

@@ -1,12 +1,12 @@
 # builder
-FROM rust:1.74 as builder
+FROM rust:1.73 as builder

 WORKDIR /usr/src/breeze
 COPY . .

 RUN cargo install --path .

 # runner
-FROM debian:bookworm-slim
+FROM debian:bullseye-slim

 RUN apt-get update && rm -rf /var/lib/apt/lists/*
@@ -16,4 +16,4 @@ RUN useradd -m runner
 USER runner
 EXPOSE 8000
-CMD [ "breeze", "--config", "/etc/breeze.toml" ]
+CMD [ "breeze" ]

README.md

@@ -1,8 +1,6 @@
 # breeze
 breeze is a simple, performant file upload server.
-
-The primary instance is https://picture.wtf.

 ## Features
 Compared to the old Express.js backend, breeze has
 - Streamed uploading
@@ -17,10 +15,10 @@ I wrote breeze with the intention of running it in a container, but it runs just
 Either way, you need to start off by cloning the Git repository.
 ```bash
-git clone https://git.min.rip/min/breeze.git
+git clone https://git.min.rip/minish/breeze.git
 ```

-To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below. You can start it using `docker compose up -d`.
+To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below.

 ```
 version: '3.6'
@@ -31,15 +29,20 @@ services:
     volumes:
       - /srv/uploads:/data
-      - ./breeze.toml:/etc/breeze.toml
     ports:
       - 8000:8000
+    environment:
+      - BRZ_BASE_URL=http://127.0.0.1:8000
+      - BRZ_SAVE_PATH=/data
+      - BRZ_UPLOAD_KEY=hiiiiiiii
+      - BRZ_CACHE_UPL_MAX_LENGTH=134217728 # allow files up to ~134 MiB to be cached
+      - BRZ_CACHE_UPL_LIFETIME=1800 # let uploads stay in cache for 30 minutes
+      - BRZ_CACHE_SCAN_FREQ=60 # scan the cache for expired files if more than 60 seconds have passed since the last scan
+      - BRZ_CACHE_MEM_CAPACITY=4294967296 # allow 4 GiB of data to be in the cache at once
 ```

-For this configuration, it is expected that:
-* there is a clone of the Git repository in the `./breeze` folder.
-* there is a `breeze.toml` config file in current directory
-* there is a directory at `/srv/uploads` for storing uploads
+For this configuration, it is expected that there is a clone of the Git repository in the `./breeze` folder. You can start it using `docker compose up -d`.

 It can also be installed directly if you have the Rust toolchain installed:
 ```bash
@@ -48,59 +51,15 @@ cargo install --path .
 ## Usage
 ### Hosting
-Configuration is read through a toml file.
-
-By default it'll try to read `./breeze.toml`, but you can specify a different path using the `-c`/`--config` command line switch.
-
-Here is an example config file:
-```toml
-[engine]
-# The base URL that the HTTP server will be accessible on.
-# This is used for formatting upload URLs.
-# Setting it to "https://picture.wtf" would result in
-# upload urls of "https://picture.wtf/p/abcdef.png", etc.
-base_url = "http://127.0.0.1:8000"
-
-# The location that uploads will be saved to.
-# It should be a path to a directory on disk that you can write to.
-save_path = "/data"
-
-# OPTIONAL - If set, the static key specified will be required to upload new files.
-# If it is not set, no key will be required.
-upload_key = "hiiiiiiii"
-
-# OPTIONAL - specifies what to show when the site is visited on http
-# It is sent with text/plain content type.
-# There are two variables you can use:
-# %uplcount% - total number of uploads present on the server
-# %version% - current breeze version (e.g. 0.1.5)
-motd = "my image host, currently hosting %uplcount% files"
-
-[engine.cache]
-# The file size (in bytes) that a file must be under
-# to get cached.
-max_length = 134_217_728
-
-# How long a cached upload will remain cached. (in seconds)
-upload_lifetime = 1800
-
-# How often the cache will be checked for expired uploads.
-# It is not a continuous scan, and only is triggered upon a cache operation.
-scan_freq = 60
-
-# How much memory (in bytes) the cache is allowed to consume.
-mem_capacity = 4_294_967_295
-
-[http]
-# The address that the HTTP server will listen on. (ip:port)
-# Use 0.0.0.0 as the IP to listen publicly, 127.0.0.1 only lets your
-# computer access it
-listen_on = "127.0.0.1:8000"
-
-[logger]
-# OPTIONAL - the current log level.
-# Default level is warn.
-level = "warn"
-```
+Configuration is read through environment variables, because I wanted to run this using Docker Compose.
+```
+BRZ_BASE_URL - base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)
+BRZ_SAVE_PATH - this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\brzuploads)
+BRZ_UPLOAD_KEY (optional) - if not empty, the key you specify will be required to upload new files.
+BRZ_CACHE_UPL_MAX_LENGTH - this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)
+BRZ_CACHE_UPL_LIFETIME - this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)
+BRZ_CACHE_SCAN_FREQ - this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)
+BRZ_CACHE_MEM_CAPACITY - this is the amount of memory the cache will hold before dropping entries
+```
 ### Uploading

archived/Cargo.lock (generated): 7 lines changed

@@ -8,7 +8,6 @@ version = "0.2.0"
 dependencies = [
  "bytes",
  "once_cell",
- "rustc-hash",
 ]

 [[package]]
@@ -22,9 +21,3 @@ name = "once_cell"
 version = "1.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b1c601810575c99596d4afc46f78a678c80105117c379eb3650cf99b8a21ce5b"
-
-[[package]]
-name = "rustc-hash"
-version = "1.1.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2"

archived/Cargo.toml

@@ -6,4 +6,4 @@ license = "MIT"
 [dependencies]
 bytes = "1.3.0"
 once_cell = "1.3.1"

archived/src/lib.rs

@@ -29,11 +29,7 @@ impl Archive {
         }
     } */

-    pub fn with_full_scan(
-        full_scan_frequency: Duration,
-        entry_lifetime: Duration,
-        capacity: usize,
-    ) -> Self {
+    pub fn with_full_scan(full_scan_frequency: Duration, entry_lifetime: Duration, capacity: usize) -> Self {
         Self {
             cache_table: HashMap::with_capacity(256),
             full_scan_frequency: Some(full_scan_frequency),
@@ -71,7 +67,11 @@ impl Archive {
             .map(|cache_entry| &cache_entry.value)
     }

-    pub fn get_or_insert<F>(&mut self, key: String, factory: F) -> &Bytes
+    pub fn get_or_insert<F>(
+        &mut self,
+        key: String,
+        factory: F,
+    ) -> &Bytes
     where
         F: Fn() -> Bytes,
     {
@@ -87,15 +87,15 @@ impl Archive {
                 &occupied.into_mut().value
             }
-            Entry::Vacant(vacant) => {
-                &vacant
-                    .insert(CacheEntry::new(factory(), self.entry_lifetime))
-                    .value
-            }
+            Entry::Vacant(vacant) => &vacant.insert(CacheEntry::new(factory(), self.entry_lifetime)).value,
         }
     }

-    pub fn insert(&mut self, key: String, value: Bytes) -> Option<Bytes> {
+    pub fn insert(
+        &mut self,
+        key: String,
+        value: Bytes,
+    ) -> Option<Bytes> {
         let now = SystemTime::now();

         self.try_full_scan_expired_items(now);
@@ -144,7 +144,7 @@ impl Archive {
                 Some(())
             }
-            None => None,
+            None => None
         }
     }
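For context, breeze's engine (further down) drives this cache roughly as follows; a minimal sketch, with method signatures inferred from the calls visible in this compare rather than from the full `archived` sources:

```rust
use std::time::Duration;

use archived::Archive; // the local ./archived crate from Cargo.toml
use bytes::Bytes;

fn main() {
    // constructed the same way engine.rs calls Archive::with_full_scan
    let mut cache = Archive::with_full_scan(
        Duration::from_secs(60),   // full scan frequency
        Duration::from_secs(1800), // entry lifetime
        4_294_967_295,             // memory capacity in bytes
    );

    let name = "abcdef.png".to_string();
    cache.insert(name.clone(), Bytes::from_static(b"file contents"));

    // engine.rs checks contains_key before get, so the Option can be unwrapped safely
    if cache.contains_key(&name) {
        let data = cache.get(&name).expect("entry was just inserted").to_owned();
        println!("cached {} bytes", data.len());
    }
}
```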

src/config.rs

@@ -1,81 +0,0 @@
-use std::{path::PathBuf, time::Duration};
-
-use serde::Deserialize;
-use serde_with::{serde_as, DisplayFromStr, DurationSeconds};
-use tracing_subscriber::filter::LevelFilter;
-
-#[derive(Deserialize)]
-pub struct Config {
-    pub engine: EngineConfig,
-    pub http: HttpConfig,
-    pub logger: LoggerConfig,
-}
-
-fn default_motd() -> String {
-    "breeze file server (v%version%) - currently hosting %uplcount% files".to_string()
-}
-
-#[derive(Deserialize)]
-pub struct EngineConfig {
-    /// The url that the instance of breeze is meant to be accessed from.
-    ///
-    /// ex: https://picture.wtf would generate links like https://picture.wtf/p/abcdef.png
-    pub base_url: String,
-
-    /// Location on disk the uploads are to be saved to
-    pub save_path: PathBuf,
-
-    /// Authentication key for new uploads, will be required if this is specified. (optional)
-    #[serde(default)]
-    pub upload_key: String,
-
-    /// Configuration for cache system
-    pub cache: CacheConfig,
-
-    /// Motd displayed when the server's index page is visited.
-    ///
-    /// This isn't explicitly engine-related but the engine is what gets passed to routes,
-    /// so it is here for now.
-    #[serde(default = "default_motd")]
-    pub motd: String,
-}
-
-#[serde_as]
-#[derive(Deserialize)]
-pub struct CacheConfig {
-    /// The maximum length in bytes that a file can be
-    /// before it skips cache (in seconds)
-    pub max_length: usize,
-
-    /// The amount of time a file can last inside the cache (in seconds)
-    #[serde_as(as = "DurationSeconds")]
-    pub upload_lifetime: Duration,
-
-    /// How often the cache is to be scanned for
-    /// expired entries (in seconds)
-    #[serde_as(as = "DurationSeconds")]
-    pub scan_freq: Duration,
-
-    /// How much memory the cache is allowed to use (in bytes)
-    pub mem_capacity: usize,
-}
-
-#[derive(Deserialize)]
-pub struct HttpConfig {
-    pub listen_on: String,
-}
-
-fn default_level_filter() -> LevelFilter {
-    LevelFilter::WARN
-}
-
-#[serde_as]
-#[derive(Deserialize)]
-pub struct LoggerConfig {
-    /// Minimum level a log must be for it to be shown.
-    /// This defaults to "warn" if not specified.
-    #[serde_as(as = "DisplayFromStr")]
-    #[serde(default = "default_level_filter")]
-    // yes... kind of a hack but serde doesn't have anything better
-    pub level: LevelFilter,
-}
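Since the module above is deleted wholesale in this compare, here is a minimal standalone sketch of the pattern it implemented, deserializing a toml document with serde (the struct is a trimmed stand-in for the full `Config`, using the serde/toml crates from the base Cargo.toml):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct HttpConfig {
    listen_on: String,
}

#[derive(Deserialize)]
struct Config {
    http: HttpConfig,
}

fn main() {
    // the base-side main.rs does the same with the file contents it reads from disk
    let cfg: Config = toml::from_str("[http]\nlisten_on = \"127.0.0.1:8000\"")
        .expect("invalid config");

    println!("listening on {}", cfg.http.listen_on);
}
```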

src/engine.rs

@@ -2,6 +2,7 @@ use std::{
     ffi::OsStr,
     path::{Path, PathBuf},
     sync::atomic::{AtomicUsize, Ordering},
+    time::Duration,
 };

 use archived::Archive;
@@ -17,69 +18,70 @@ use tokio::{
     },
 };
 use tokio_stream::StreamExt;
-use tracing::{debug, error, info};
 use walkdir::WalkDir;

-use crate::{
-    config,
-    view::{ViewError, ViewSuccess},
-};
+use crate::view::{ViewError, ViewSuccess};

-/// breeze engine! this is the core of everything
 pub struct Engine {
-    /// The in-memory cache that cached uploads are stored in.
-    cache: RwLock<Archive>,
-
-    /// Cached count of uploaded files.
-    pub upl_count: AtomicUsize,
-
-    /// Engine configuration
-    pub cfg: config::EngineConfig,
+    // state
+    cache: RwLock<Archive>, // in-memory cache
+    pub upl_count: AtomicUsize, // cached count of uploaded files
+
+    // config
+    pub base_url: String, // base url for formatting upload urls
+    save_path: PathBuf, // where uploads are saved to disk
+    pub upload_key: String, // authorisation key for uploading new files
+
+    cache_max_length: usize, // if an upload is bigger than this size, it won't be cached
 }

 impl Engine {
-    /// Creates a new instance of the breeze engine.
-    pub fn new(cfg: config::EngineConfig) -> Self {
+    // create a new engine
+    pub fn new(
+        base_url: String,
+        save_path: PathBuf,
+        upload_key: String,
+        cache_max_length: usize,
+        cache_lifetime: Duration,
+        cache_full_scan_freq: Duration, // how often the cache will be scanned for expired items
+        cache_mem_capacity: usize,
+    ) -> Self {
         Self {
             cache: RwLock::new(Archive::with_full_scan(
-                cfg.cache.scan_freq,
-                cfg.cache.upload_lifetime,
-                cfg.cache.mem_capacity,
+                cache_full_scan_freq,
+                cache_lifetime,
+                cache_mem_capacity,
             )),
-            upl_count: AtomicUsize::new(
-                WalkDir::new(&cfg.save_path)
-                    .min_depth(1)
-                    .into_iter()
-                    .count(),
-            ), // count the amount of files in the save path and initialise our cached count with it
-            cfg,
+            upl_count: AtomicUsize::new(WalkDir::new(&save_path).min_depth(1).into_iter().count()), // count the amount of files in the save path and initialise our cached count with it
+            base_url,
+            save_path,
+            upload_key,
+            cache_max_length,
         }
     }

-    /// Returns if an upload would be able to be cached
-    #[inline(always)]
     fn will_use_cache(&self, length: usize) -> bool {
-        length <= self.cfg.cache.max_length
+        length <= self.cache_max_length
     }

-    /// Check if an upload exists in cache or on disk
+    // checks in cache or disk for an upload using a pathbuf
    pub async fn upload_exists(&self, path: &Path) -> bool {
         let cache = self.cache.read().await;

-        // extract file name, since that's what cache uses
+        // check if upload is in cache
         let name = path
             .file_name()
             .and_then(OsStr::to_str)
             .unwrap_or_default()
             .to_string();

-        // check in cache
         if cache.contains_key(&name) {
             return true;
         }

-        // check on disk
+        // check if upload is on disk
         if path.exists() {
             return true;
         }
@@ -87,10 +89,7 @@ impl Engine {
         return false;
     }

-    /// Generate a new save path for an upload.
-    ///
-    /// This will call itself recursively if it picks
-    /// a name that's already used. (it is rare)
+    // generate a new save path for an upload
     #[async_recursion::async_recursion]
     pub async fn gen_path(&self, original_path: &PathBuf) -> PathBuf {
         // generate a 6-character alphanumeric string
@@ -108,7 +107,7 @@ impl Engine {
             .to_string();

         // path on disk
-        let mut path = self.cfg.save_path.clone();
+        let mut path = self.save_path.clone();
         path.push(&id);
         path.set_extension(original_extension);
@@ -120,8 +119,7 @@
         }
     }

-    /// Process an upload.
-    /// This is called by the /new route.
+    // process an upload. this is called by the new route
     pub async fn process_upload(
         &self,
         path: PathBuf,
@@ -195,20 +193,25 @@ impl Engine {
         self.upl_count.fetch_add(1, Ordering::Relaxed);
     }

-    /// Read an upload from cache, if it exists.
-    ///
-    /// Previously, this would lock the cache as
-    /// writable to renew the upload's cache lifespan.
-    /// Locking the cache as readable allows multiple concurrent
-    /// readers though, which allows me to handle multiple views concurrently.
+    // read an upload from cache, if it exists
+    // previously, this would lock the cache as writable to renew the upload's cache lifespan
+    // locking the cache as readable allows multiple concurrent readers, which allows me to handle multiple views concurrently
     async fn read_cached_upload(&self, name: &String) -> Option<Bytes> {
         let cache = self.cache.read().await;

+        if !cache.contains_key(name) {
+            return None;
+        }
+
         // fetch upload data from cache
-        cache.get(name).map(ToOwned::to_owned)
+        let data = cache
+            .get(name)
+            .expect("failed to read get upload data from cache")
+            .to_owned();
+
+        Some(data)
     }

-    /// Reads an upload, from cache or on disk.
     pub async fn get_upload(&self, original_path: &Path) -> Result<ViewSuccess, ViewError> {
         // extract upload file name
         let name = original_path
@@ -218,7 +221,7 @@ impl Engine {
             .to_string();

         // path on disk
-        let mut path = self.cfg.save_path.clone();
+        let mut path = self.save_path.clone();
         path.push(&name);

         // check if the upload exists, if not then 404
@@ -230,24 +233,18 @@ impl Engine {
         let cached_data = self.read_cached_upload(&name).await;

         if let Some(data) = cached_data {
-            info!("got upload from cache!");
+            info!("got upload from cache!!");

             Ok(ViewSuccess::FromCache(data))
         } else {
-            // we already know the upload exists by now so this is okay
             let mut file = File::open(&path).await.unwrap();

             // read upload length from disk
-            let metadata = file.metadata().await;
-
-            if metadata.is_err() {
-                error!("failed to get upload file metadata!");
-                return Err(ViewError::InternalServerError);
-            }
-
-            let metadata = metadata.unwrap();
-            let length = metadata.len() as usize;
+            let length = file
+                .metadata()
+                .await
+                .expect("failed to read upload file metadata")
+                .len() as usize;

             debug!("read upload from disk, size = {}", length);

src/index.rs

@@ -2,20 +2,20 @@ use std::sync::{atomic::Ordering, Arc};
 use axum::extract::State;

-/// Show index status page with amount of uploaded files
+// show index status page with amount of uploaded files
 pub async fn index(State(engine): State<Arc<crate::engine::Engine>>) -> String {
     let count = engine.upl_count.load(Ordering::Relaxed);

-    let motd = engine.cfg.motd.clone();
-
-    motd
-        .replace("%version%", env!("CARGO_PKG_VERSION"))
-        .replace("%uplcount%", &count.to_string())
+    format!("minish's image host, currently hosting {} files", count)
 }

-// robots.txt that tells web crawlers not to list uploads
-const ROBOTS_TXT: &str = concat!(
-    "User-Agent: *\n",
-    "Disallow: /p/*\n",
-    "Allow: /\n"
-);
-
 pub async fn robots_txt() -> &'static str {
+    /// robots.txt that tells web crawlers not to list uploads
+    const ROBOTS_TXT: &str = concat!("User-Agent: *\n", "Disallow: /p/*\n", "Allow: /\n");
+
     ROBOTS_TXT
 }

src/main.rs

@@ -1,56 +1,63 @@
-use std::{path::PathBuf, sync::Arc};
+use std::{env, path::PathBuf, sync::Arc, time::Duration};

 extern crate axum;

-use clap::Parser;
+#[macro_use]
+extern crate log;

 use engine::Engine;

 use axum::{
     routing::{get, post},
     Router,
 };
-use tokio::{fs, signal};
-use tracing::{info, warn};
+use tokio::signal;

-mod config;
 mod engine;
 mod index;
 mod new;
 mod view;

-#[derive(Parser, Debug)]
-struct Args {
-    /// The path to configuration file
-    #[arg(short, long, value_name = "file")]
-    config: PathBuf,
-}
-
 #[tokio::main]
 async fn main() {
-    // read & parse args
-    let args = Args::parse();
-
-    // read & parse config
-    let config_str = fs::read_to_string(args.config)
-        .await
-        .expect("failed to read config file! make sure it exists and you have read permissions");
-
-    let cfg: config::Config = toml::from_str(&config_str).expect("invalid config! check that you have included all required options and structured it properly (no config options expecting a number getting a string, etc.)");
-
-    tracing_subscriber::fmt()
-        .with_max_level(cfg.logger.level)
-        .init();
-
-    if !cfg.engine.save_path.exists() || !cfg.engine.save_path.is_dir() {
+    // initialise logger
+    pretty_env_logger::init();
+
+    // read env vars
+    let base_url = env::var("BRZ_BASE_URL").expect("missing BRZ_BASE_URL! base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)");
+    let save_path = env::var("BRZ_SAVE_PATH").expect("missing BRZ_SAVE_PATH! this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\\brzuploads)");
+    let upload_key = env::var("BRZ_UPLOAD_KEY").unwrap_or_default();
+    let cache_max_length = env::var("BRZ_CACHE_UPL_MAX_LENGTH").expect("missing BRZ_CACHE_UPL_MAX_LENGTH! this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)");
+    let cache_upl_lifetime = env::var("BRZ_CACHE_UPL_LIFETIME").expect("missing BRZ_CACHE_UPL_LIFETIME! this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)");
+    let cache_scan_freq = env::var("BRZ_CACHE_SCAN_FREQ").expect("missing BRZ_CACHE_SCAN_FREQ! this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)");
+    let cache_mem_capacity = env::var("BRZ_CACHE_MEM_CAPACITY").expect("missing BRZ_CACHE_MEM_CAPACITY! this is the amount of memory the cache will hold before dropping entries");
+
+    // parse env vars
+    let save_path = PathBuf::from(save_path);
+    let cache_max_length = cache_max_length.parse::<usize>().expect("failed parsing BRZ_CACHE_UPL_MAX_LENGTH! it should be a positive number without any separators");
+    let cache_upl_lifetime = Duration::from_secs(cache_upl_lifetime.parse::<u64>().expect("failed parsing BRZ_CACHE_UPL_LIFETIME! it should be a positive number without any separators"));
+    let cache_scan_freq = Duration::from_secs(cache_scan_freq.parse::<u64>().expect("failed parsing BRZ_CACHE_SCAN_FREQ! it should be a positive number without any separators"));
+    let cache_mem_capacity = cache_mem_capacity.parse::<usize>().expect("failed parsing BRZ_CACHE_MEM_CAPACITY! it should be a positive number without any separators");
+
+    if !save_path.exists() || !save_path.is_dir() {
         panic!("the save path does not exist or is not a directory! this is invalid");
     }

-    if cfg.engine.upload_key.is_empty() {
-        warn!("engine upload_key is empty! no key will be required for uploading new files");
+    if upload_key.is_empty() {
+        // i would prefer this to be a warning but the default log level hides those
+        error!("upload key (BRZ_UPLOAD_KEY) is empty! no key will be required for uploading new files");
     }

     // create engine
-    let engine = Engine::new(cfg.engine);
+    let engine = Engine::new(
+        base_url,
+        save_path,
+        upload_key,
+        cache_max_length,
+        cache_upl_lifetime,
+        cache_scan_freq,
+        cache_mem_capacity,
+    );

     // build main router
     let app = Router::new()
@@ -61,16 +68,11 @@ async fn main() {
         .with_state(Arc::new(engine));

     // start web server
-    axum::Server::bind(
-        &cfg.http
-            .listen_on
-            .parse()
-            .expect("failed to parse listen_on address"),
-    )
-    .serve(app.into_make_service())
-    .with_graceful_shutdown(shutdown_signal())
-    .await
-    .expect("failed to start server");
+    axum::Server::bind(&"0.0.0.0:8000".parse().unwrap())
+        .serve(app.into_make_service())
+        .with_graceful_shutdown(shutdown_signal())
+        .await
+        .unwrap();
 }

 async fn shutdown_signal() {
@@ -97,4 +99,4 @@ async fn shutdown_signal() {
     }

     info!("shutting down!");
 }

src/new.rs

@@ -6,21 +6,17 @@ use axum::{
 };
 use hyper::{header, HeaderMap, StatusCode};

-/// The request handler for the /new path.
-/// This handles all new uploads.
 #[axum::debug_handler]
 pub async fn new(
     State(engine): State<Arc<crate::engine::Engine>>,
-    Query(params): Query<HashMap<String, String>>,
     headers: HeaderMap,
+    Query(params): Query<HashMap<String, String>>,
     stream: BodyStream,
 ) -> Result<String, StatusCode> {
     let key = params.get("key");

-    const EMPTY_STRING: &String = &String::new();
-
     // check upload key, if i need to
-    if !engine.cfg.upload_key.is_empty() && key.unwrap_or(EMPTY_STRING) != &engine.cfg.upload_key {
+    if !engine.upload_key.is_empty() && key.unwrap_or(&String::new()) != &engine.upload_key {
         return Err(StatusCode::FORBIDDEN);
     }
@@ -40,7 +36,7 @@ pub async fn new(
         .unwrap_or_default()
         .to_string();

-    let url = format!("{}/p/{}", engine.cfg.base_url, name);
+    let url = format!("{}/p/{}", engine.base_url, name);

     // read and parse content-length, and if it fails just assume it's really high so it doesn't cache
     let content_length = headers
src/view.rs

@@ -13,43 +13,22 @@ use bytes::Bytes;
 use hyper::{http::HeaderValue, StatusCode};
 use tokio::{fs::File, runtime::Handle};
 use tokio_util::io::ReaderStream;
-use tracing::{error, debug, info};

-/// Responses for a successful view operation
 pub enum ViewSuccess {
-    /// A file read from disk, suitable for larger files.
-    ///
-    /// The file provided will be streamed from disk and
-    /// back to the viewer.
-    ///
-    /// This is only ever used if a file exceeds the
-    /// cache's maximum file size.
     FromDisk(File),
-
-    /// A file read from in-memory cache, best for smaller files.
-    ///
-    /// The file is taken from the cache in its entirety
-    /// and sent back to the viewer.
-    ///
-    /// If a file can be fit into cache, this will be
-    /// used even if it's read from disk.
     FromCache(Bytes),
 }

-/// Responses for a failed view operation
 pub enum ViewError {
-    /// Will send status code 404 with a plaintext "not found" message.
-    NotFound,
-
-    /// Will send status code 500 with a plaintext "internal server error" message.
-    InternalServerError,
+    NotFound, // 404
+    InternalServerError, // 500
 }

 impl IntoResponse for ViewSuccess {
     fn into_response(self) -> Response {
         match self {
             ViewSuccess::FromDisk(file) => {
-                // get handle to current tokio runtime
+                // get handle to current runtime
                 // i use this to block on futures here (not async)
                 let handle = Handle::current();
                 let _ = handle.enter();
@@ -109,21 +88,24 @@ impl IntoResponse for ViewSuccess {
 impl IntoResponse for ViewError {
     fn into_response(self) -> Response {
         match self {
-            ViewError::NotFound => (
-                StatusCode::NOT_FOUND,
-                "not found!"
-            ).into_response(),
-            ViewError::InternalServerError => (
-                StatusCode::INTERNAL_SERVER_ERROR,
-                "internal server error!"
-            ).into_response(),
+            ViewError::NotFound => {
+                // convert string into response, change status code
+                let mut res = "not found!".into_response();
+                *res.status_mut() = StatusCode::NOT_FOUND;
+
+                res
+            }
+            ViewError::InternalServerError => {
+                // convert string into response, change status code
+                let mut res = "internal server error!".into_response();
+                *res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
+
+                res
+            }
         }
     }
 }

-/// The request handler for /p/* path.
-/// All file views are handled here.
 #[axum::debug_handler]
 pub async fn view(
     State(engine): State<Arc<crate::engine::Engine>>,
@@ -134,7 +116,7 @@ pub async fn view(
         .components()
         .any(|x| !matches!(x, Component::Normal(_)))
     {
-        info!("a request attempted path traversal");
+        warn!("a request attempted path traversal");

         return Err(ViewError::NotFound);
     }