
Conversation

@lasseklovstad
Owner

No description provided.


Copilot AI left a comment


Pull request overview

This pull request migrates the file upload functionality from Cloudinary to Azure Blob Storage. The change introduces a new client-side upload flow using Azure SAS tokens, adds database schema changes to support soft-deletion and file type tracking, and removes the dependency on the @unpic/react library.

Changes:

  • Removed @unpic/react dependency and replaced optimized image components with standard HTML img elements
  • Added Azure Blob Storage integration with SAS token-based uploads and new server-side utilities for token generation
  • Introduced database migration adding is_deleted and type columns to track image deletion state and media types, changing from hard to soft deletion

Reviewed changes

Copilot reviewed 19 out of 20 changed files in this pull request and generated 16 comments.

Summary per file:

  • pnpm-lock.yaml: Removed @unpic/react and related dependencies
  • package.json: Removed the @unpic/react package from dependencies
  • migrations/meta/_journal.json: Added a migration entry for the schema changes
  • migrations/meta/0013_snapshot.json: Complete database schema snapshot after the migration
  • migrations/0013_swift_screwball.sql: Added is_deleted and type columns to the cloudinary_image table
  • db/schema.ts: Added isDeleted and type fields to the cloudinaryImage schema, updated the type export
  • app/utils/env.server.ts: Added AZURE_BLOB_NAME and AZURE_BLOB_KEY environment variables
  • app/utils/cloudinaryUtils.ts: Updated getIsVideo to check both the URL pattern and the type field
  • app/utils/azure.server.ts: New utility for generating Azure SAS tokens for blob uploads
  • app/routes/event.$eventId.images/uploadFilesAction.ts: New schema for validating Azure blob upload form data
  • app/routes/event.$eventId.images/route.tsx: Added an Azure blob upload intent handler; keeps the legacy Cloudinary path
  • app/routes/api.sas-token.ts: New API endpoint for generating SAS tokens for client-side uploads
  • app/components/UploadMedia.tsx: Replaced server-side multipart upload with client-side Azure blob upload
  • app/components/CloudImageFullscreen.tsx: Replaced the @unpic/react Image with a standard img element
  • app/components/CloudImage.tsx: Replaced the @unpic/react Image with a standard img element
  • app/components/AvatarUser.tsx: Replaced the @unpic/react Image with a standard img element
  • app/.server/services/cloudinaryService.ts: Removed the deleteImage function
  • app/.server/db/report.ts: Removed the deleteImage call when deleting reports
  • .env.example: Added Azure blob storage environment variables
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported


Comment on lines +26 to +28
accountKey: process.env.AZURE_BLOB_KEY,
accountName: process.env.AZURE_BLOB_NAME,
containerName,

Copilot AI Jan 12, 2026


The environment variables AZURE_BLOB_KEY and AZURE_BLOB_NAME are accessed without validation that they are defined. If these environment variables are missing, the code will fail at runtime. Since these are required for the Azure upload functionality to work, consider adding validation or using a non-optional type in the schema to ensure they are present.

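A startup-time check along these lines would surface a missing variable immediately rather than at first use; `requireEnv` and `loadAzureBlobConfig` are hypothetical names, not part of this PR:

```typescript
// Hypothetical helper: throw at load/config time if a required variable is absent.
const requireEnv = (name: string): string => {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

// Resolve the Azure settings in one place so a misconfigured deploy fails fast.
const loadAzureBlobConfig = () => ({
  accountName: requireEnv("AZURE_BLOB_NAME"),
  accountKey: requireEnv("AZURE_BLOB_KEY"),
});
```

The same effect can be had by marking the variables as required in the existing env schema in app/utils/env.server.ts.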
Comment on lines 76 to 80
await insertAzureBlob(
eventId,
form.files.map((file) => ({
cloudId: file.id,
cloudUrl: `https://${process.env.AZURE_BLOB_NAME}.blob.core.windows.net/${container}/event/${eventId}/${file.id}`,

Copilot AI Jan 12, 2026


The environment variable AZURE_BLOB_NAME is accessed without validation that it is defined. If this environment variable is missing, the URL construction will fail or produce invalid URLs. This is the same issue as in api.sas-token.ts and should be validated to ensure it's present at runtime.

Suggested change
await insertAzureBlob(
eventId,
form.files.map((file) => ({
cloudId: file.id,
cloudUrl: `https://${process.env.AZURE_BLOB_NAME}.blob.core.windows.net/${container}/event/${eventId}/${file.id}`,
const azureBlobName = process.env.AZURE_BLOB_NAME;
if (!azureBlobName) {
throw new Error("Environment variable AZURE_BLOB_NAME is not set");
}
await insertAzureBlob(
eventId,
form.files.map((file) => ({
cloudId: file.id,
cloudUrl: `https://${azureBlobName}.blob.core.windows.net/${container}/event/${eventId}/${file.id}`,


import type { Route } from "./+types/api.sas-token";

const expireInMin = 1;

Copilot AI Jan 12, 2026


The SAS tokens expire in only 1 minute, which may not be sufficient for users uploading large files or multiple files on slower connections. Users may encounter failed uploads due to token expiration before the upload completes. Consider increasing the expiration time to at least 5-10 minutes to accommodate slower connections and larger files.

Suggested change
const expireInMin = 1;
const expireInMin = 10;

@@ -0,0 +1,2 @@
ALTER TABLE `cloudinary_image` ADD `is_deleted` integer DEFAULT false NOT NULL;--> statement-breakpoint
ALTER TABLE `cloudinary_image` ADD `type` text; No newline at end of file

Copilot AI Jan 12, 2026


The migration adds an "is_deleted" column with a default value of false, but there's no index on this column. Given that the getEventCloudinaryImages query filters by "isDeleted = false" (line 192 in gataEvent.ts), queries may become slow as the number of deleted images grows. Consider adding an index on the is_deleted column for better query performance.

Suggested change
ALTER TABLE `cloudinary_image` ADD `type` text;
ALTER TABLE `cloudinary_image` ADD `type` text;
CREATE INDEX idx_cloudinary_image_is_deleted ON `cloudinary_image` (`is_deleted`);

Comment on lines +113 to +123
const createBlobSas = async (input: SASinput) => {
const url = [input.containerName, input.blobName].filter((el) => el).join("/");
const storageUri = new URL(url, `https://${input.accountName}.blob.core.windows.net`);
const queryParams = await getSASqueryParams(input);

storageUri.search = queryParams;

return storageUri.toString();
};

export { createBlobSas };

Copilot AI Jan 12, 2026


The createBlobSas function and its helper functions lack documentation. The implementation involves complex cryptographic operations and Azure-specific SAS token generation logic that would benefit from function-level JSDoc comments explaining the purpose, parameters, and return values. This would improve maintainability and help future developers understand the Azure SAS token generation process.

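As an illustration of the suggested documentation style, here is a JSDoc-annotated sketch of a URL-building helper. It is simplified from, and not identical to, the PR's `createBlobSas`:

```typescript
/**
 * Builds the full SAS URL for a blob or container (illustrative sketch).
 *
 * @param accountName   Azure storage account (`<account>.blob.core.windows.net`).
 * @param containerName Container holding the blob.
 * @param sasQuery      Pre-signed SAS query string, without the leading `?`.
 * @param blobName      Optional blob path; omit to target the container itself.
 * @returns The resource URL with the SAS token appended as its query string.
 */
const buildSasUrl = (
  accountName: string,
  containerName: string,
  sasQuery: string,
  blobName?: string
): string => {
  // Join the non-empty path segments, then attach the SAS query.
  const path = [containerName, blobName].filter(Boolean).join("/");
  const uri = new URL(path, `https://${accountName}.blob.core.windows.net`);
  uri.search = sasQuery;
  return uri.toString();
};
```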
Comment on lines 76 to 118
const response = await fetch(
href("/api/sas-token") +
"?" +
new URLSearchParams({
eventId: eventId.toString(),
numberOfFiles: files.length.toString(),
}).toString()
);
const sasTokens = (await response.json()) as unknown as Awaited<ReturnType<typeof loader>>;
for (let i = 0; i < files.length; i++) {
try {
const file = files.item(i);
const token = sasTokens[i];
// eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
if (!token) {
throw new Error("Could not find token");
}
if (!file) {
throw new Error("Could not find file");
}
const dimensions = await getMediaDimensions(file);
const uploadResponse = await uploadNewBlob(file, token);
uploadedFiles.push({
...uploadResponse,
...(dimensions && { width: dimensions.width, height: dimensions.height }),
});
} catch (e: unknown) {
console.error(e);
}
}
if (uploadedFiles.length === 0) return;
uploadedFiles.forEach((file, index) => {
formData.append(`files[${index}].id`, file.id);
formData.append(`files[${index}].type`, file.type);
if (file.width !== undefined) {
formData.append(`files[${index}].width`, file.width.toString());
}
if (file.height !== undefined) {
formData.append(`files[${index}].height`, file.height.toString());
}
});
formData.set("intent", uploadFilesIntent);
await fetcher.submit(formData, { method: "POST", action: actionPath });

Copilot AI Jan 12, 2026


The fetcher submission happens after the uploadFiles promise resolves, but there's no error handling if the submission fails. If the Azure blob upload succeeds but the database insertion fails, the files will exist in blob storage but won't be tracked in the database. Consider adding error handling for the fetcher submission and potentially informing the user of the failure.

Suggested change
const response = await fetch(
href("/api/sas-token") +
"?" +
new URLSearchParams({
eventId: eventId.toString(),
numberOfFiles: files.length.toString(),
}).toString()
);
const sasTokens = (await response.json()) as unknown as Awaited<ReturnType<typeof loader>>;
for (let i = 0; i < files.length; i++) {
try {
const file = files.item(i);
const token = sasTokens[i];
// eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
if (!token) {
throw new Error("Could not find token");
}
if (!file) {
throw new Error("Could not find file");
}
const dimensions = await getMediaDimensions(file);
const uploadResponse = await uploadNewBlob(file, token);
uploadedFiles.push({
...uploadResponse,
...(dimensions && { width: dimensions.width, height: dimensions.height }),
});
} catch (e: unknown) {
console.error(e);
}
}
if (uploadedFiles.length === 0) return;
uploadedFiles.forEach((file, index) => {
formData.append(`files[${index}].id`, file.id);
formData.append(`files[${index}].type`, file.type);
if (file.width !== undefined) {
formData.append(`files[${index}].width`, file.width.toString());
}
if (file.height !== undefined) {
formData.append(`files[${index}].height`, file.height.toString());
}
});
formData.set("intent", uploadFilesIntent);
await fetcher.submit(formData, { method: "POST", action: actionPath });
try {
const response = await fetch(
href("/api/sas-token") +
"?" +
new URLSearchParams({
eventId: eventId.toString(),
numberOfFiles: files.length.toString(),
}).toString()
);
if (!response.ok) {
throw new Error(`Failed to fetch SAS tokens: ${response.status} ${response.statusText}`);
}
const sasTokens = (await response.json()) as unknown as Awaited<ReturnType<typeof loader>>;
for (let i = 0; i < files.length; i++) {
try {
const file = files.item(i);
const token = sasTokens[i];
// eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
if (!token) {
throw new Error("Could not find token");
}
if (!file) {
throw new Error("Could not find file");
}
const dimensions = await getMediaDimensions(file);
const uploadResponse = await uploadNewBlob(file, token);
uploadedFiles.push({
...uploadResponse,
...(dimensions && { width: dimensions.width, height: dimensions.height }),
});
} catch (e: unknown) {
console.error(e);
}
}
if (uploadedFiles.length === 0) return;
uploadedFiles.forEach((file, index) => {
formData.append(`files[${index}].id`, file.id);
formData.append(`files[${index}].type`, file.type);
if (file.width !== undefined) {
formData.append(`files[${index}].width`, file.width.toString());
}
if (file.height !== undefined) {
formData.append(`files[${index}].height`, file.height.toString());
}
});
formData.set("intent", uploadFilesIntent);
try {
await fetcher.submit(formData, { method: "POST", action: actionPath });
} catch (error) {
console.error("Failed to submit uploaded files to the server", error);
if (typeof window !== "undefined") {
window.alert("Kunne ikke lagre opplastede filer. Vennligst prøv igjen.");
}
}
} catch (error) {
console.error("Failed to prepare or process file upload", error);
if (typeof window !== "undefined") {
window.alert("Kunne ikke fullføre opplastingen. Vennligst prøv igjen.");
}
}

Comment on lines 195 to 197
export const deleteEventCloudinaryImage = async (cloudId: string) => {
await db.delete(cloudinaryImage).where(eq(cloudinaryImage.cloudId, cloudId));
await db.update(cloudinaryImage).set({ isDeleted: true }).where(eq(cloudinaryImage.cloudId, cloudId));
};

Copilot AI Jan 12, 2026


The delete functionality has been changed from hard delete to soft delete (setting isDeleted to true), but there's no mechanism to actually remove the blob from Azure storage. This means deleted images will remain in blob storage indefinitely, consuming storage space and incurring costs. Consider implementing a cleanup job or providing a way to permanently delete blobs from Azure storage when images are soft-deleted.

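One possible shape for such a cleanup job, sketched with an injected deleter so it stays storage-agnostic. All names here are hypothetical; in production the deleter could wrap `BlobClient.deleteIfExists` from @azure/storage-blob:

```typescript
// Hypothetical row shape: the cleanup only needs the blob's cloudId.
type DeletedImage = { cloudId: string };

// Walk the soft-deleted rows and try to remove each blob. Failures are
// collected rather than thrown, so one bad blob does not abort the run
// and the row stays soft-deleted for a retry on the next pass.
const cleanupDeletedBlobs = async (
  deletedImages: DeletedImage[],
  deleteBlob: (cloudId: string) => Promise<void>
): Promise<{ removed: string[]; failed: string[] }> => {
  const removed: string[] = [];
  const failed: string[] = [];
  for (const image of deletedImages) {
    try {
      await deleteBlob(image.cloudId);
      removed.push(image.cloudId);
    } catch {
      failed.push(image.cloudId);
    }
  }
  return { removed, failed };
};
```

Rows in the `removed` list could then be hard-deleted from the database, completing the two-phase delete.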
contentType?: string;
};

const truncatedISO8061Date = (date: Date) => {

Copilot AI Jan 12, 2026


The function name contains a typo. "ISO8061" should be "ISO8601". The ISO 8601 standard specifies the format for date and time representations.

Suggested change
const truncatedISO8061Date = (date: Date) => {
const truncatedISO8601Date = (date: Date) => {

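Assuming the helper's purpose is to format a Date with seconds precision, as Azure expects in the SAS start/expiry fields, a correctly named sketch could be:

```typescript
// Sketch (assumption, not the PR's exact code): format a Date as
// seconds-precision UTC ISO 8601, e.g. "2026-01-12T10:30:15Z",
// by stripping the milliseconds that Date#toISOString always emits.
const truncatedISO8601Date = (date: Date): string =>
  date.toISOString().replace(/\.\d{3}Z$/, "Z");
```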
Comment on lines +86 to +87
// TODO:
// await notifyParticipantsImagesIsUploaded(loggedInUser, eventId);

Copilot AI Jan 12, 2026


The TODO comment indicates that notification functionality has been disabled during the migration to Azure Blob Storage. This means users who uploaded images via the new Azure path won't receive notifications, creating inconsistent behavior between the old Cloudinary path (line 63) and the new Azure path. The notification call should be uncommented to ensure feature parity.

Suggested change
// TODO:
// await notifyParticipantsImagesIsUploaded(loggedInUser, eventId);
await notifyParticipantsImagesIsUploaded(loggedInUser, eventId);

Comment on lines 102 to 104
} catch (e: unknown) {
console.error(e);
}

Copilot AI Jan 12, 2026


Error handling silently logs errors and continues processing remaining files. If an upload fails, the user won't be notified of which specific files failed. Consider collecting failed uploads and displaying an error message to the user, or at least indicating partial success when some files fail to upload.

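A sketch of that approach, collecting per-file outcomes instead of only logging. Names here are illustrative; `uploadOne` stands in for the real per-file work (dimension lookup, blob upload):

```typescript
// Each file ends up as either a success carrying its upload result,
// or a failure carrying the caught error.
type UploadOutcome<T> =
  | { file: string; ok: true; value: T }
  | { file: string; ok: false; error: unknown };

const uploadAll = async <T>(
  fileNames: string[],
  uploadOne: (name: string) => Promise<T>
): Promise<UploadOutcome<T>[]> => {
  const outcomes: UploadOutcome<T>[] = [];
  for (const name of fileNames) {
    try {
      outcomes.push({ file: name, ok: true, value: await uploadOne(name) });
    } catch (error) {
      outcomes.push({ file: name, ok: false, error });
    }
  }
  return outcomes;
};
```

The caller can then report partial success, e.g. `outcomes.filter((o) => !o.ok).map((o) => o.file)` gives the list of file names to show in an error toast.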
@lasseklovstad lasseklovstad merged commit 37a64ed into main Jan 17, 2026
4 checks passed
@lasseklovstad lasseklovstad deleted the azure-blob-storage branch January 17, 2026 11:52