v0.1.0 Release Notes¶
Release Date: 2026-01-10
Omnistorage v0.1.0 is the initial release of a unified storage abstraction layer for Go, inspired by rclone. It provides a single interface for reading and writing to various storage backends with composable layers for compression and record framing.
Highlights¶
- Unified Interface - Single API for multiple storage backends
- 5 Backends - File, Memory, S3, SFTP, and Channel
- Sync Engine - rclone-inspired file synchronization with ~95% feature parity
- Composable Layers - Compression (gzip, zstd) and format (NDJSON) wrappers
- Extended Interface - Optional metadata, server-side copy/move, and capability discovery
Installation¶
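Assuming the module path shown in the import statements below, the library can be added to a Go project with:

```shell
go get github.com/grokify/omnistorage
```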
What's Included¶
Core Interfaces¶
| Interface | Description |
|---|---|
| Backend | Core read/write interface with NewWriter, NewReader, Exists, Delete, List, Close |
| ExtendedBackend | Adds Stat, Mkdir, Rmdir, Copy, Move, and Features methods |
| RecordWriter | Line/record-oriented writing for streaming data |
| RecordReader | Line/record-oriented reading for streaming data |
| ObjectInfo | File metadata (Size, ModTime, Hash, ContentType) |
| Features | Backend capability discovery |
Backends¶
| Backend | Package | Extended | Description |
|---|---|---|---|
| File | backend/file | Yes | Local filesystem storage |
| Memory | backend/memory | Yes | In-memory storage for testing |
| S3 | backend/s3 | Yes | AWS S3, Cloudflare R2, MinIO, Wasabi |
| SFTP | backend/sftp | Yes | SSH file transfer with password/key auth |
| Channel | backend/channel | No | Go channel for inter-goroutine streaming |
Compression¶
| Format | Package | Description |
|---|---|---|
| Gzip | compress/gzip | Standard gzip compression |
| Zstandard | compress/zstd | High-performance Zstd compression |
Format Layers¶
| Format | Package | Description |
|---|---|---|
| NDJSON | format/ndjson | Newline-delimited JSON record framing |
Sync Engine¶
The sync package provides rclone-like file synchronization:
| Function | Description |
|---|---|
| Sync() | One-way sync (make the destination mirror the source) |
| Copy() | Copy files without deleting extras |
| Bisync() | Bidirectional sync with conflict resolution |
| Check() | Compare files between backends |
| Verify() | Verify file integrity |
Sync Features:
- Parallel transfers with configurable concurrency
- Bandwidth limiting (token bucket algorithm)
- Retry with exponential backoff and jitter
- Progress callbacks for real-time status
- Dry-run mode for testing
- Structured logging via slog
Filtering:
- Include/exclude patterns (glob syntax)
- Size filters (MinSize, MaxSize)
- Age filters (MinAge, MaxAge)
- Filter from file
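As a rough model of the include/exclude glob matching described above (not the library's actual filter implementation), excludes typically take precedence and an empty include list admits everything:

```go
package main

import (
	"fmt"
	"path"
)

// include reports whether name passes a simple include/exclude glob
// filter: any exclude match rejects; otherwise an include must match,
// with an empty include list meaning "include all".
func include(name string, includes, excludes []string) bool {
	for _, pat := range excludes {
		if ok, _ := path.Match(pat, name); ok {
			return false
		}
	}
	if len(includes) == 0 {
		return true
	}
	for _, pat := range includes {
		if ok, _ := path.Match(pat, name); ok {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(include("report.csv", []string{"*.csv"}, []string{"tmp*"})) // true
	fmt.Println(include("tmp.csv", []string{"*.csv"}, []string{"tmp*"}))   // false
}
```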
Conflict Resolution (Bisync):
- NewerWins - Newer file overwrites older
- LargerWins - Larger file overwrites smaller
- SourceWins - First backend always wins
- DestWins - Second backend always wins
- KeepBoth - Keep both with conflict suffix
- Skip - Skip conflicting files
- Error - Report as error
Multi-Writer¶
Fan-out writing to multiple backends with three modes:
- WriteAll - All backends must succeed
- WriteBestEffort - Continue on failures
- WriteQuorum - Majority must succeed
Utilities¶
| Function | Description |
|---|---|
| CopyPath() | Copy between any two backends |
| MovePath() | Move by copy-then-delete |
| SmartMove() | Server-side move with fallback |
| AsExtended() | Safe type assertion to ExtendedBackend |
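The SmartMove strategy in the table, "server-side move with fallback", amounts to trying the native move and falling back to copy-then-delete when the backend doesn't support it. The function values below stand in for backend operations and are purely illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotSupported = errors.New("server-side move not supported")

// smartMove tries a server-side move first; if the backend reports it
// unsupported, it falls back to copy-then-delete. Any other move error
// is returned as-is.
func smartMove(move, copyThenDelete func() error) error {
	if err := move(); err == nil || !errors.Is(err, errNotSupported) {
		return err
	}
	return copyThenDelete()
}

func main() {
	err := smartMove(
		func() error { return errNotSupported },
		func() error { fmt.Println("fell back to copy+delete"); return nil },
	)
	fmt.Println(err)
}
```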
Quick Start¶
Basic Read/Write¶
import (
	"context"
	"fmt"
	"io"

	"github.com/grokify/omnistorage/backend/file"
)

func main() {
	ctx := context.Background()
	backend := file.New(file.Config{Root: "/data"})
	defer backend.Close()
	// Write
	w, _ := backend.NewWriter(ctx, "hello.txt")
	w.Write([]byte("Hello, World!"))
	w.Close()
	// Read
	r, _ := backend.NewReader(ctx, "hello.txt")
	data, _ := io.ReadAll(r)
	r.Close()
	fmt.Println(string(data))
}
With Compression¶
import (
"github.com/grokify/omnistorage/backend/file"
"github.com/grokify/omnistorage/compress/gzip"
)
// Write compressed
w, _ := backend.NewWriter(ctx, "data.gz")
gz, _ := gzip.NewWriter(w)
gz.Write([]byte("compressed content"))
gz.Close() // flush and finalize the gzip stream
w.Close()  // then close the underlying writer
Sync Between Backends¶
import "github.com/grokify/omnistorage/sync"
result, err := sync.Sync(ctx, srcBackend, dstBackend, "data/", "backup/", sync.Options{
DeleteExtra: true,
Concurrency: 4,
Progress: func(p sync.Progress) {
fmt.Printf("%s: %d/%d\n", p.Phase, p.FilesTransferred, p.TotalFiles)
},
})
Using the Registry¶
import (
"github.com/grokify/omnistorage"
_ "github.com/grokify/omnistorage/backend/file"
_ "github.com/grokify/omnistorage/backend/s3"
)
// Open by name from configuration
backend, _ := omnistorage.Open("s3", map[string]string{
"bucket": "my-bucket",
"region": "us-east-1",
})
External Backends¶
Some backends are in separate repositories to minimize dependencies:
| Backend | Repository |
|---|---|
| Google Drive | omnistorage-google |
| Google Cloud Storage | omnistorage-google (planned) |
Breaking Changes¶
None - this is the initial release.
Known Limitations¶
- SFTP host key verification is disabled by default (configure KnownHostsFile for production)
- Dropbox backend is not yet implemented (config only)
- No CLI tool yet (planned for v1.1.0)
Documentation¶
What's Next¶
See the Roadmap for planned features:
- v0.2.0 - Consumer cloud backends (Dropbox, OneDrive)
- v0.3.0 - Security & authentication
- v0.4.0 - Observability (metrics, tracing)
- v1.0.0 - Stable API
- v1.1.0 - CLI tool
Contributing¶
Contributions welcome! Priority areas:
- New backends (follow backend/file as a template)
- Tests (especially integration tests)
- Documentation improvements
- Bug fixes
License¶
MIT License - see LICENSE