Compare commits


No commits in common. "3513337ac77ff149e5d825b00640b4f905961e5f" and "6deddc3014b351c69b0bc026cf05799f69d04e67" have entirely different histories.

13 changed files with 351 additions and 1037 deletions

Cargo.lock (generated, 895 lines changed)

File diff suppressed because it is too large.

Cargo.toml

@@ -1,6 +1,6 @@
[package]
name = "breeze"
version = "0.1.5"
version = "0.1.4"
edition = "2021"
[dependencies]
@@ -15,11 +15,6 @@ rand = "0.8.5"
async-recursion = "1.0.0"
walkdir = "2"
futures = "0.3"
tracing = "0.1"
tracing-subscriber = "0.3"
log = "0.4"
pretty_env_logger = "0.5.0"
archived = { path = "./archived" }
xxhash-rust = { version = "0.8.7", features = ["xxh3"] }
serde = { version = "1.0.189", features = ["derive"] }
toml = "0.8.2"
clap = { version = "4.4.6", features = ["derive"] }
serde_with = "3.4.0"
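This dependency hunk trades between two logging stacks: `tracing` + `tracing-subscriber` on the 0.1.5 side, `log` + `pretty_env_logger` on the 0.1.4 side (the matching code change is in the src/main.rs diff below). A minimal sketch of initialising the tracing stack, using only calls that appear in that diff, with the level hardcoded instead of read from config:

```rust
use tracing_subscriber::filter::LevelFilter;

fn main() {
    // equivalent of the `tracing_subscriber::fmt()...init()` call in src/main.rs
    tracing_subscriber::fmt()
        .with_max_level(LevelFilter::WARN)
        .init();

    tracing::warn!("logger ready");
}
```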

Dockerfile

@@ -1,12 +1,12 @@
# builder
FROM rust:1.74 as builder
FROM rust:1.73 as builder
WORKDIR /usr/src/breeze
COPY . .
RUN cargo install --path .
# runner
FROM debian:bookworm-slim
FROM debian:bullseye-slim
RUN apt-get update && rm -rf /var/lib/apt/lists/*
@@ -16,4 +16,4 @@ RUN useradd -m runner
USER runner
EXPOSE 8000
CMD [ "breeze", "--config", "/etc/breeze.toml" ]
CMD [ "breeze" ]

README.md

@@ -1,8 +1,6 @@
# breeze
breeze is a simple, performant file upload server.
The primary instance is https://picture.wtf.
## Features
Compared to the old Express.js backend, breeze has
- Streamed uploading
@@ -17,10 +15,10 @@ I wrote breeze with the intention of running it in a container, but it runs just
Either way, you need to start off by cloning the Git repository.
```bash
git clone https://git.min.rip/min/breeze.git
git clone https://git.min.rip/minish/breeze.git
```
To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below. You can start it using `docker compose up -d`.
To run it in Docker, I recommend using Docker Compose. An example `docker-compose.yaml` configuration is below.
```
version: '3.6'
@@ -31,15 +29,20 @@ services:
volumes:
- /srv/uploads:/data
- ./breeze.toml:/etc/breeze.toml
ports:
- 8000:8000
environment:
- BRZ_BASE_URL=http://127.0.0.1:8000
- BRZ_SAVE_PATH=/data
- BRZ_UPLOAD_KEY=hiiiiiiii
- BRZ_CACHE_UPL_MAX_LENGTH=134217728 # allow files up to ~134 MiB to be cached
- BRZ_CACHE_UPL_LIFETIME=1800 # let uploads stay in cache for 30 minutes
- BRZ_CACHE_SCAN_FREQ=60 # scan the cache for expired files if more than 60 seconds have passed since the last scan
- BRZ_CACHE_MEM_CAPACITY=4294967296 # allow 4 GiB of data to be in the cache at once
```
For this configuration, it is expected that:
* there is a clone of the Git repository in the `./breeze` folder.
* there is a `breeze.toml` config file in the current directory
* there is a directory at `/srv/uploads` for storing uploads
For this configuration, it is expected that there is a clone of the Git repository in the `./breeze` folder. You can start it using `docker compose up -d`.
It can also be installed directly if you have the Rust toolchain installed:
```bash
@@ -48,59 +51,15 @@ cargo install --path .
## Usage
### Hosting
Configuration is read through a toml file.
By default it'll try to read `./breeze.toml`, but you can specify a different path using the `-c`/`--config` command line switch.
Here is an example config file:
```toml
[engine]
# The base URL that the HTTP server will be accessible on.
# This is used for formatting upload URLs.
# Setting it to "https://picture.wtf" would result in
# upload urls of "https://picture.wtf/p/abcdef.png", etc.
base_url = "http://127.0.0.1:8000"
# The location that uploads will be saved to.
# It should be a path to a directory on disk that you can write to.
save_path = "/data"
# OPTIONAL - If set, the static key specified will be required to upload new files.
# If it is not set, no key will be required.
upload_key = "hiiiiiiii"
# OPTIONAL - specifies what to show when the site is visited on http
# It is sent with text/plain content type.
# There are two variables you can use:
# %uplcount% - total number of uploads present on the server
# %version% - current breeze version (e.g. 0.1.5)
motd = "my image host, currently hosting %uplcount% files"
[engine.cache]
# The file size (in bytes) that a file must be under
# to get cached.
max_length = 134_217_728
# How long a cached upload will remain cached. (in seconds)
upload_lifetime = 1800
# How often the cache will be checked for expired uploads.
# It is not a continuous scan, and only is triggered upon a cache operation.
scan_freq = 60
# How much memory (in bytes) the cache is allowed to consume.
mem_capacity = 4_294_967_295
[http]
# The address that the HTTP server will listen on. (ip:port)
# Use 0.0.0.0 as the IP to listen publicly, 127.0.0.1 only lets your
# computer access it
listen_on = "127.0.0.1:8000"
[logger]
# OPTIONAL - the current log level.
# Default level is warn.
level = "warn"
Configuration is read through environment variables, because I wanted to run this using Docker Compose.
```
BRZ_BASE_URL - base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)
BRZ_SAVE_PATH - this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\brzuploads)
BRZ_UPLOAD_KEY (optional) - if not empty, the key you specify will be required to upload new files.
BRZ_CACHE_UPL_MAX_LENGTH - this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)
BRZ_CACHE_UPL_LIFETIME - this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)
BRZ_CACHE_SCAN_FREQ - this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)
BRZ_CACHE_MEM_CAPACITY - this is the amount of memory the cache will hold before dropping entries
```
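These variables are read and parsed by hand in the 0.1.4 src/main.rs, shown further down. A condensed sketch of the pattern for one of them, assuming a 64-bit value in seconds:

```rust
use std::{env, time::Duration};

fn main() {
    // set here only so the example is self-contained
    env::set_var("BRZ_CACHE_UPL_LIFETIME", "1800");

    let lifetime = Duration::from_secs(
        env::var("BRZ_CACHE_UPL_LIFETIME")
            .expect("missing BRZ_CACHE_UPL_LIFETIME!")
            .parse::<u64>()
            .expect("it should be a positive number without any separators"),
    );

    assert_eq!(lifetime, Duration::from_secs(1800));
}
```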
### Uploading

archived/Cargo.lock (generated, 7 lines changed)

@@ -8,7 +8,6 @@ version = "0.2.0"
dependencies = [
"bytes",
"once_cell",
"rustc-hash",
]
[[package]]
@@ -22,9 +21,3 @@ name = "once_cell"
version = "1.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1c601810575c99596d4afc46f78a678c80105117c379eb3650cf99b8a21ce5b"
[[package]]
name = "rustc-hash"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2"

archived/Cargo.toml

@@ -6,4 +6,4 @@ license = "MIT"
[dependencies]
bytes = "1.3.0"
once_cell = "1.3.1"
once_cell = "1.3.1"

archived/src/lib.rs

@@ -29,11 +29,7 @@ impl Archive {
}
} */
pub fn with_full_scan(
full_scan_frequency: Duration,
entry_lifetime: Duration,
capacity: usize,
) -> Self {
pub fn with_full_scan(full_scan_frequency: Duration, entry_lifetime: Duration, capacity: usize) -> Self {
Self {
cache_table: HashMap::with_capacity(256),
full_scan_frequency: Some(full_scan_frequency),
@@ -71,7 +67,11 @@ impl Archive {
.map(|cache_entry| &cache_entry.value)
}
pub fn get_or_insert<F>(&mut self, key: String, factory: F) -> &Bytes
pub fn get_or_insert<F>(
&mut self,
key: String,
factory: F,
) -> &Bytes
where
F: Fn() -> Bytes,
{
@@ -87,15 +87,15 @@ impl Archive {
&occupied.into_mut().value
}
Entry::Vacant(vacant) => {
&vacant
.insert(CacheEntry::new(factory(), self.entry_lifetime))
.value
}
Entry::Vacant(vacant) => &vacant.insert(CacheEntry::new(factory(), self.entry_lifetime)).value,
}
}
pub fn insert(&mut self, key: String, value: Bytes) -> Option<Bytes> {
pub fn insert(
&mut self,
key: String,
value: Bytes,
) -> Option<Bytes> {
let now = SystemTime::now();
self.try_full_scan_expired_items(now);
@@ -144,7 +144,7 @@ impl Archive {
Some(())
}
None => None,
None => None
}
}
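Both sides of this file expose the same cache API, only formatted differently. A minimal usage sketch, assuming the signatures visible in the hunks above (`with_full_scan`, `insert`, `get_or_insert`, plus the `contains_key(&String)` the engine calls):

```rust
use std::time::Duration;

use archived::Archive;
use bytes::Bytes;

fn main() {
    // scan for expired entries at most every 60s, keep entries for 30 minutes,
    // cap memory usage at ~4 GiB
    let mut cache = Archive::with_full_scan(
        Duration::from_secs(60),
        Duration::from_secs(1800),
        4_294_967_295,
    );

    let key = "abcdef.png".to_string();
    cache.insert(key.clone(), Bytes::from_static(b"png bytes"));
    assert!(cache.contains_key(&key));

    // on a miss, the factory closure supplies the value
    let data = cache.get_or_insert("ghijkl.png".to_string(), || Bytes::from_static(b"fallback"));
    assert_eq!(data, &Bytes::from_static(b"fallback"));
}
```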

src/config.rs

@@ -1,81 +0,0 @@
use std::{path::PathBuf, time::Duration};
use serde::Deserialize;
use serde_with::{serde_as, DisplayFromStr, DurationSeconds};
use tracing_subscriber::filter::LevelFilter;
#[derive(Deserialize)]
pub struct Config {
pub engine: EngineConfig,
pub http: HttpConfig,
pub logger: LoggerConfig,
}
fn default_motd() -> String {
"breeze file server (v%version%) - currently hosting %uplcount% files".to_string()
}
#[derive(Deserialize)]
pub struct EngineConfig {
/// The url that the instance of breeze is meant to be accessed from.
///
/// ex: https://picture.wtf would generate links like https://picture.wtf/p/abcdef.png
pub base_url: String,
/// Location on disk the uploads are to be saved to
pub save_path: PathBuf,
/// Authentication key for new uploads, will be required if this is specified. (optional)
#[serde(default)]
pub upload_key: String,
/// Configuration for cache system
pub cache: CacheConfig,
/// Motd displayed when the server's index page is visited.
///
/// This isn't explicitly engine-related but the engine is what gets passed to routes,
/// so it is here for now.
#[serde(default = "default_motd")]
pub motd: String,
}
#[serde_as]
#[derive(Deserialize)]
pub struct CacheConfig {
/// The maximum length in bytes that a file can be
/// before it skips cache (in seconds)
pub max_length: usize,
/// The amount of time a file can last inside the cache (in seconds)
#[serde_as(as = "DurationSeconds")]
pub upload_lifetime: Duration,
/// How often the cache is to be scanned for
/// expired entries (in seconds)
#[serde_as(as = "DurationSeconds")]
pub scan_freq: Duration,
/// How much memory the cache is allowed to use (in bytes)
pub mem_capacity: usize,
}
#[derive(Deserialize)]
pub struct HttpConfig {
pub listen_on: String,
}
fn default_level_filter() -> LevelFilter {
LevelFilter::WARN
}
#[serde_as]
#[derive(Deserialize)]
pub struct LoggerConfig {
/// Minimum level a log must be for it to be shown.
/// This defaults to "warn" if not specified.
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "default_level_filter")]
// yes... kind of a hack but serde doesn't have anything better
pub level: LevelFilter,
}
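The deleted module leans on `serde_with` to map plain integers in the toml onto richer types. A cut-down sketch of the `DurationSeconds` conversion, using a hypothetical `CacheDemo` struct in place of the full `CacheConfig`:

```rust
use std::time::Duration;

use serde::Deserialize;
use serde_with::{serde_as, DurationSeconds};

#[serde_as]
#[derive(Deserialize)]
struct CacheDemo {
    // `upload_lifetime = 1800` in the toml becomes a Duration of 1800 seconds
    #[serde_as(as = "DurationSeconds")]
    upload_lifetime: Duration,
}

fn main() {
    let demo: CacheDemo = toml::from_str("upload_lifetime = 1800").unwrap();
    assert_eq!(demo.upload_lifetime, Duration::from_secs(1800));
}
```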

src/engine.rs

@@ -2,6 +2,7 @@ use std::{
ffi::OsStr,
path::{Path, PathBuf},
sync::atomic::{AtomicUsize, Ordering},
time::Duration,
};
use archived::Archive;
@@ -17,69 +18,70 @@ use tokio::{
},
};
use tokio_stream::StreamExt;
use tracing::{debug, error, info};
use walkdir::WalkDir;
use crate::{
config,
view::{ViewError, ViewSuccess},
};
use crate::view::{ViewError, ViewSuccess};
/// breeze engine! this is the core of everything
pub struct Engine {
/// The in-memory cache that cached uploads are stored in.
cache: RwLock<Archive>,
// state
cache: RwLock<Archive>, // in-memory cache
pub upl_count: AtomicUsize, // cached count of uploaded files
/// Cached count of uploaded files.
pub upl_count: AtomicUsize,
// config
pub base_url: String, // base url for formatting upload urls
save_path: PathBuf, // where uploads are saved to disk
pub upload_key: String, // authorisation key for uploading new files
/// Engine configuration
pub cfg: config::EngineConfig,
cache_max_length: usize, // if an upload is bigger than this size, it won't be cached
}
impl Engine {
/// Creates a new instance of the breeze engine.
pub fn new(cfg: config::EngineConfig) -> Self {
// create a new engine
pub fn new(
base_url: String,
save_path: PathBuf,
upload_key: String,
cache_max_length: usize,
cache_lifetime: Duration,
cache_full_scan_freq: Duration, // how often the cache will be scanned for expired items
cache_mem_capacity: usize,
) -> Self {
Self {
cache: RwLock::new(Archive::with_full_scan(
cfg.cache.scan_freq,
cfg.cache.upload_lifetime,
cfg.cache.mem_capacity,
cache_full_scan_freq,
cache_lifetime,
cache_mem_capacity,
)),
upl_count: AtomicUsize::new(
WalkDir::new(&cfg.save_path)
.min_depth(1)
.into_iter()
.count(),
), // count the amount of files in the save path and initialise our cached count with it
upl_count: AtomicUsize::new(WalkDir::new(&save_path).min_depth(1).into_iter().count()), // count the amount of files in the save path and initialise our cached count with it
cfg,
base_url,
save_path,
upload_key,
cache_max_length,
}
}
/// Returns if an upload would be able to be cached
#[inline(always)]
fn will_use_cache(&self, length: usize) -> bool {
length <= self.cfg.cache.max_length
length <= self.cache_max_length
}
/// Check if an upload exists in cache or on disk
// checks in cache or disk for an upload using a pathbuf
pub async fn upload_exists(&self, path: &Path) -> bool {
let cache = self.cache.read().await;
// extract file name, since that's what cache uses
// check if upload is in cache
let name = path
.file_name()
.and_then(OsStr::to_str)
.unwrap_or_default()
.to_string();
// check in cache
if cache.contains_key(&name) {
return true;
}
// check on disk
// check if upload is on disk
if path.exists() {
return true;
}
@@ -87,10 +89,7 @@ impl Engine {
return false;
}
/// Generate a new save path for an upload.
///
/// This will call itself recursively if it picks
/// a name that's already used. (it is rare)
// generate a new save path for an upload
#[async_recursion::async_recursion]
pub async fn gen_path(&self, original_path: &PathBuf) -> PathBuf {
// generate a 6-character alphanumeric string
@@ -108,7 +107,7 @@
.to_string();
// path on disk
let mut path = self.cfg.save_path.clone();
let mut path = self.save_path.clone();
path.push(&id);
path.set_extension(original_extension);
@@ -120,8 +119,7 @@
}
}
/// Process an upload.
/// This is called by the /new route.
// process an upload. this is called by the new route
pub async fn process_upload(
&self,
path: PathBuf,
@@ -195,20 +193,25 @@ impl Engine {
self.upl_count.fetch_add(1, Ordering::Relaxed);
}
/// Read an upload from cache, if it exists.
///
/// Previously, this would lock the cache as
/// writable to renew the upload's cache lifespan.
/// Locking the cache as readable allows multiple concurrent
/// readers though, which allows me to handle multiple views concurrently.
// read an upload from cache, if it exists
// previously, this would lock the cache as writable to renew the upload's cache lifespan
// locking the cache as readable allows multiple concurrent readers, which allows me to handle multiple views concurrently
async fn read_cached_upload(&self, name: &String) -> Option<Bytes> {
let cache = self.cache.read().await;
if !cache.contains_key(name) {
return None;
}
// fetch upload data from cache
cache.get(name).map(ToOwned::to_owned)
let data = cache
.get(name)
.expect("failed to read get upload data from cache")
.to_owned();
Some(data)
}
/// Reads an upload, from cache or on disk.
pub async fn get_upload(&self, original_path: &Path) -> Result<ViewSuccess, ViewError> {
// extract upload file name
let name = original_path
@@ -218,7 +221,7 @@
.to_string();
// path on disk
let mut path = self.cfg.save_path.clone();
let mut path = self.save_path.clone();
path.push(&name);
// check if the upload exists, if not then 404
@@ -230,24 +233,18 @@
let cached_data = self.read_cached_upload(&name).await;
if let Some(data) = cached_data {
info!("got upload from cache!");
info!("got upload from cache!!");
Ok(ViewSuccess::FromCache(data))
} else {
// we already know the upload exists by now so this is okay
let mut file = File::open(&path).await.unwrap();
// read upload length from disk
let metadata = file.metadata().await;
if metadata.is_err() {
error!("failed to get upload file metadata!");
return Err(ViewError::InternalServerError);
}
let metadata = metadata.unwrap();
let length = metadata.len() as usize;
let length = file
.metadata()
.await
.expect("failed to read upload file metadata")
.len() as usize;
debug!("read upload from disk, size = {}", length);

src/index.rs

@@ -2,20 +2,20 @@ use std::sync::{atomic::Ordering, Arc};
use axum::extract::State;
/// Show index status page with amount of uploaded files
// show index status page with amount of uploaded files
pub async fn index(State(engine): State<Arc<crate::engine::Engine>>) -> String {
let count = engine.upl_count.load(Ordering::Relaxed);
let motd = engine.cfg.motd.clone();
motd
.replace("%version%", env!("CARGO_PKG_VERSION"))
.replace("%uplcount%", &count.to_string())
format!("minish's image host, currently hosting {} files", count)
}
// robots.txt that tells web crawlers not to list uploads
const ROBOTS_TXT: &str = concat!(
"User-Agent: *\n",
"Disallow: /p/*\n",
"Allow: /\n"
);
pub async fn robots_txt() -> &'static str {
/// robots.txt that tells web crawlers not to list uploads
const ROBOTS_TXT: &str = concat!("User-Agent: *\n", "Disallow: /p/*\n", "Allow: /\n");
ROBOTS_TXT
}
}
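The 0.1.5 handler renders the configurable motd by substitution; the 0.1.4 one hardcodes the string. The substitution logic, extracted into a plain function for illustration (the `render_motd` name is hypothetical):

```rust
fn render_motd(motd: &str, upl_count: usize) -> String {
    motd.replace("%version%", env!("CARGO_PKG_VERSION"))
        .replace("%uplcount%", &upl_count.to_string())
}

fn main() {
    let motd = "my image host, currently hosting %uplcount% files";
    assert_eq!(
        render_motd(motd, 42),
        "my image host, currently hosting 42 files"
    );
}
```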

src/main.rs

@@ -1,56 +1,63 @@
use std::{path::PathBuf, sync::Arc};
use std::{env, path::PathBuf, sync::Arc, time::Duration};
extern crate axum;
use clap::Parser;
#[macro_use]
extern crate log;
use engine::Engine;
use axum::{
routing::{get, post},
Router,
};
use tokio::{fs, signal};
use tracing::{info, warn};
use tokio::signal;
mod config;
mod engine;
mod index;
mod new;
mod view;
#[derive(Parser, Debug)]
struct Args {
/// The path to configuration file
#[arg(short, long, value_name = "file")]
config: PathBuf,
}
#[tokio::main]
async fn main() {
// read & parse args
let args = Args::parse();
// initialise logger
pretty_env_logger::init();
// read & parse config
let config_str = fs::read_to_string(args.config)
.await
.expect("failed to read config file! make sure it exists and you have read permissions");
// read env vars
let base_url = env::var("BRZ_BASE_URL").expect("missing BRZ_BASE_URL! base url for upload urls (ex: http://127.0.0.1:8000 for http://127.0.0.1:8000/p/abcdef.png, http://picture.wtf for http://picture.wtf/p/abcdef.png)");
let save_path = env::var("BRZ_SAVE_PATH").expect("missing BRZ_SAVE_PATH! this should be a path where uploads are saved to disk (ex: /srv/uploads, C:\\brzuploads)");
let upload_key = env::var("BRZ_UPLOAD_KEY").unwrap_or_default();
let cache_max_length = env::var("BRZ_CACHE_UPL_MAX_LENGTH").expect("missing BRZ_CACHE_UPL_MAX_LENGTH! this is the max length an upload can be in bytes before it won't be cached (ex: 80000000 for 80MB)");
let cache_upl_lifetime = env::var("BRZ_CACHE_UPL_LIFETIME").expect("missing BRZ_CACHE_UPL_LIFETIME! this indicates how long an upload will stay in cache (ex: 1800 for 30 minutes, 60 for 1 minute)");
let cache_scan_freq = env::var("BRZ_CACHE_SCAN_FREQ").expect("missing BRZ_CACHE_SCAN_FREQ! this is the frequency of full cache scans, which scan for and remove expired uploads (ex: 60 for 1 minute)");
let cache_mem_capacity = env::var("BRZ_CACHE_MEM_CAPACITY").expect("missing BRZ_CACHE_MEM_CAPACITY! this is the amount of memory the cache will hold before dropping entries");
let cfg: config::Config = toml::from_str(&config_str).expect("invalid config! check that you have included all required options and structured it properly (no config options expecting a number getting a string, etc.)");
// parse env vars
let save_path = PathBuf::from(save_path);
let cache_max_length = cache_max_length.parse::<usize>().expect("failed parsing BRZ_CACHE_UPL_MAX_LENGTH! it should be a positive number without any separators");
let cache_upl_lifetime = Duration::from_secs(cache_upl_lifetime.parse::<u64>().expect("failed parsing BRZ_CACHE_UPL_LIFETIME! it should be a positive number without any separators"));
let cache_scan_freq = Duration::from_secs(cache_scan_freq.parse::<u64>().expect("failed parsing BRZ_CACHE_SCAN_FREQ! it should be a positive number without any separators"));
let cache_mem_capacity = cache_mem_capacity.parse::<usize>().expect("failed parsing BRZ_CACHE_MEM_CAPACITY! it should be a positive number without any separators");
tracing_subscriber::fmt()
.with_max_level(cfg.logger.level)
.init();
if !cfg.engine.save_path.exists() || !cfg.engine.save_path.is_dir() {
if !save_path.exists() || !save_path.is_dir() {
panic!("the save path does not exist or is not a directory! this is invalid");
}
if cfg.engine.upload_key.is_empty() {
warn!("engine upload_key is empty! no key will be required for uploading new files");
if upload_key.is_empty() {
// i would prefer this to be a warning but the default log level hides those
error!("upload key (BRZ_UPLOAD_KEY) is empty! no key will be required for uploading new files");
}
// create engine
let engine = Engine::new(cfg.engine);
let engine = Engine::new(
base_url,
save_path,
upload_key,
cache_max_length,
cache_upl_lifetime,
cache_scan_freq,
cache_mem_capacity,
);
// build main router
let app = Router::new()
@@ -61,16 +68,11 @@ async fn main() {
.with_state(Arc::new(engine));
// start web server
axum::Server::bind(
&cfg.http
.listen_on
.parse()
.expect("failed to parse listen_on address"),
)
.serve(app.into_make_service())
.with_graceful_shutdown(shutdown_signal())
.await
.expect("failed to start server");
axum::Server::bind(&"0.0.0.0:8000".parse().unwrap())
.serve(app.into_make_service())
.with_graceful_shutdown(shutdown_signal())
.await
.unwrap();
}
async fn shutdown_signal() {
@@ -97,4 +99,4 @@ async fn shutdown_signal() {
}
info!("shutting down!");
}
}
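Only the tail of `shutdown_signal` survives the diff. A minimal stand-in that would satisfy `with_graceful_shutdown`, assuming plain ctrl-c handling (the real function may also watch unix terminate signals):

```rust
use tokio::signal;

// stand-in for the mostly-elided shutdown_signal(): resolve once ctrl-c
// arrives, letting axum's graceful shutdown drain open connections
async fn shutdown_signal() {
    signal::ctrl_c()
        .await
        .expect("failed to install ctrl-c handler");
}

#[tokio::main]
async fn main() {
    println!("press ctrl-c to exit");
    shutdown_signal().await;
    println!("shutting down!");
}
```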

src/new.rs

@@ -6,21 +6,17 @@
};
use hyper::{header, HeaderMap, StatusCode};
/// The request handler for the /new path.
/// This handles all new uploads.
#[axum::debug_handler]
pub async fn new(
State(engine): State<Arc<crate::engine::Engine>>,
Query(params): Query<HashMap<String, String>>,
headers: HeaderMap,
Query(params): Query<HashMap<String, String>>,
stream: BodyStream,
) -> Result<String, StatusCode> {
let key = params.get("key");
const EMPTY_STRING: &String = &String::new();
// check upload key, if i need to
if !engine.cfg.upload_key.is_empty() && key.unwrap_or(EMPTY_STRING) != &engine.cfg.upload_key {
if !engine.upload_key.is_empty() && key.unwrap_or(&String::new()) != &engine.upload_key {
return Err(StatusCode::FORBIDDEN);
}
@@ -40,7 +36,7 @@ pub async fn new(
.unwrap_or_default()
.to_string();
let url = format!("{}/p/{}", engine.cfg.base_url, name);
let url = format!("{}/p/{}", engine.base_url, name);
// read and parse content-length, and if it fails just assume it's really high so it doesn't cache
let content_length = headers
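The hunk is cut off mid-expression, but the comment spells out the intent: an unreadable content-length is treated as huge so the upload skips the cache. A sketch of that fallback with a hypothetical helper, not taken from the diff:

```rust
use hyper::{header, HeaderMap};

// hypothetical helper: a missing or unparsable content-length degrades to
// usize::MAX so the upload is considered too big to cache
fn content_length_or_max(headers: &HeaderMap) -> usize {
    headers
        .get(header::CONTENT_LENGTH)
        .and_then(|value| value.to_str().ok())
        .and_then(|s| s.parse::<usize>().ok())
        .unwrap_or(usize::MAX)
}

fn main() {
    assert_eq!(content_length_or_max(&HeaderMap::new()), usize::MAX);
}
```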

src/view.rs

@@ -13,43 +13,22 @@ use bytes::Bytes;
use hyper::{http::HeaderValue, StatusCode};
use tokio::{fs::File, runtime::Handle};
use tokio_util::io::ReaderStream;
use tracing::{error, debug, info};
/// Responses for a successful view operation
pub enum ViewSuccess {
/// A file read from disk, suitable for larger files.
///
/// The file provided will be streamed from disk and
/// back to the viewer.
///
/// This is only ever used if a file exceeds the
/// cache's maximum file size.
FromDisk(File),
/// A file read from in-memory cache, best for smaller files.
///
/// The file is taken from the cache in its entirety
/// and sent back to the viewer.
///
/// If a file can be fit into cache, this will be
/// used even if it's read from disk.
FromCache(Bytes),
}
/// Responses for a failed view operation
pub enum ViewError {
/// Will send status code 404 with a plaintext "not found" message.
NotFound,
/// Will send status code 500 with a plaintext "internal server error" message.
InternalServerError,
NotFound, // 404
InternalServerError, // 500
}
impl IntoResponse for ViewSuccess {
fn into_response(self) -> Response {
match self {
ViewSuccess::FromDisk(file) => {
// get handle to current tokio runtime
// get handle to current runtime
// i use this to block on futures here (not async)
let handle = Handle::current();
let _ = handle.enter();
@@ -109,21 +88,24 @@ impl IntoResponse for ViewSuccess {
impl IntoResponse for ViewError {
fn into_response(self) -> Response {
match self {
ViewError::NotFound => (
StatusCode::NOT_FOUND,
"not found!"
).into_response(),
ViewError::InternalServerError => (
StatusCode::INTERNAL_SERVER_ERROR,
"internal server error!"
).into_response(),
ViewError::NotFound => {
// convert string into response, change status code
let mut res = "not found!".into_response();
*res.status_mut() = StatusCode::NOT_FOUND;
res
}
ViewError::InternalServerError => {
// convert string into response, change status code
let mut res = "internal server error!".into_response();
*res.status_mut() = StatusCode::INTERNAL_SERVER_ERROR;
res
}
}
}
}
/// The request handler for /p/* path.
/// All file views are handled here.
#[axum::debug_handler]
pub async fn view(
State(engine): State<Arc<crate::engine::Engine>>,
@@ -134,7 +116,7 @@ pub async fn view(
.components()
.any(|x| !matches!(x, Component::Normal(_)))
{
info!("a request attempted path traversal");
warn!("a request attempted path traversal");
return Err(ViewError::NotFound);
}
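The final hunk is the path traversal guard: any path component that is not a plain file name gets a 404, and only the log level differs between the two sides. The check, isolated into a free function:

```rust
use std::path::{Component, Path};

// a path is rejected if any component is something other than a normal
// name, e.g. `..`, a root, or a windows prefix
fn is_traversal_attempt(path: &Path) -> bool {
    path.components().any(|c| !matches!(c, Component::Normal(_)))
}

fn main() {
    assert!(is_traversal_attempt(Path::new("../etc/passwd")));
    assert!(!is_traversal_attempt(Path::new("abcdef.png")));
}
```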