feat: add post feedback system with like/dislike functionality

feat: implement fingerprint-based voting to prevent duplicate votes
feat: add database setup documentation for likes/dislikes feature
feat: update social icons styling for better mobile responsiveness
feat: add node adapter for standalone server deployment
chore: update dependencies including astro and fingerprintjs
fix: move social icons to top of footer for better visibility
refactor: clean up meta tags in PostHead component
docs: add comprehensive database schema and API documentation

feat(components): add BuyMeCoffee component with animated SVG and hover effects

feat(components): implement BuyMeCoffee donation link with styling and animations

feat(components): create BuyMeCoffee component with responsive design and interactive elements

style: update SVG paths with fill-background class for consistent styling

style: update SVG paths and styling for better visual consistency and hover effects

style: update BuyMeCoffee component with new SVG animations and styling

feat: add hover animations and transitions to BuyMeCoffee component

refactor: reorganize SVG paths and groups in BuyMeCoffee component for better readability

The changes include:
- Adding new SVG animations and styling for the BuyMeCoffee component
- Implementing hover animations and transitions to enhance user interaction
- Refactoring the SVG structure for improved code organization and maintainability

These changes were made to improve the visual appeal and user experience of the BuyMeCoffee component while keeping the codebase clean and maintainable.

refactor(navbar): simplify class names and remove unused comments
feat(navbar): add dark mode text color support and improve mobile menu styling
feat(navbar): enhance footer with copyright, separator, and open-source link
refactor(navbar): streamline mobile menu button styling and transitions

refactor(consts): update social links and icon map
feat(consts): add Instagram and Phone social links
chore(consts): remove LinkedIn and update icon mappings

chore(blog): remove outdated blog posts
feat(blog): clean up content directory by deleting irrelevant posts

chore(content): remove outdated blog posts

The commit removes a large number of outdated blog posts that were no longer relevant or aligned with the current content strategy. This cleanup helps maintain a more focused and up-to-date blog section.

chore: remove outdated blog posts and clean up content directory

Delete multiple outdated blog post files to streamline the content directory and improve maintainability. The removed posts were no longer relevant and cluttered the repository. This cleanup helps focus on current and future content.

chore: remove outdated blog posts and related content

The commit removes a large number of outdated blog posts and related content from the repository. These files were no longer relevant or maintained, and their removal helps clean up the codebase and reduce clutter. The changes include deleting various markdown files under the `src/content/blog/` directory that covered topics like cybersecurity, data analytics, cloud computing, and cryptocurrency regulation. This cleanup aligns with the project's goal to maintain only current and relevant content.

chore(content): remove outdated blog posts

The commit removes a large number of outdated blog posts that were no longer relevant or aligned with the current content strategy. This cleanup helps maintain a focused and up-to-date content repository.

chore: remove outdated blog content

Deleted multiple outdated blog posts to clean up the repository and remove irrelevant content. The posts were no longer aligned with the current focus and direction of the project. This cleanup helps maintain a more organized and relevant codebase.

chore(content): remove outdated blog posts

Deleted multiple outdated blog posts covering various tech topics including development, startups, and certifications. The content was no longer relevant or aligned with current best practices. This cleanup helps maintain a focused and up-to-date content repository.

chore: remove outdated blog posts

The diff shows the deletion of multiple blog post files that appear to be outdated or no longer relevant. This cleanup will help maintain content quality and relevance on the site.

chore(content): remove outdated and irrelevant blog posts

This commit removes a large number of blog posts that were either outdated, irrelevant, or of low quality. The removed posts covered a wide range of topics including quantum computing, machine learning, cloud computing, and various technical tutorials. Many of these posts were auto-generated or contained generic content that didn't provide real value to readers.

The removal of these posts helps:
- Improve overall content quality
- Reduce maintenance burden
- Focus on more relevant and valuable content
- Clean up the repository structure

No existing links or references to these posts were being maintained, so their removal shouldn't impact users. This cleanup aligns with our goal of maintaining a focused, high-quality content repository.

chore(content): remove outdated blog posts

The commit removes a large number of outdated blog posts that were no longer relevant or maintained. This cleanup helps keep the content fresh and focused on current topics.

chore(content): remove outdated blog posts

The commit removes a large number of outdated blog post files that were no longer relevant or needed. This cleanup helps declutter the content directory and removes potentially stale or incorrect information. The files deleted covered a wide range of tech-related topics but were determined to be no longer useful for the current site.

chore(content): remove outdated blog posts

Deleted multiple outdated blog posts covering various tech topics including AI, edge computing, blockchain, and sustainability. These posts were no longer relevant or accurate given recent advancements in technology. The removal helps maintain content quality and ensures readers only access up-to-date information.

chore(content): remove all blog posts to clean up repository

This commit removes all existing blog post content files from the repository. The files were deleted to clean up the content directory and prepare for new content to be added in the future. The removal includes a wide range of blog posts covering various tech topics, indicating a complete content refresh is planned.

chore(content): remove outdated blog posts and articles

The commit removes a large number of outdated blog posts and articles from the content directory. These files were likely stale content that was no longer relevant or useful. The removal helps clean up the repository and maintain only current, valuable content.

 *::before,
   *::after {
     @apply border-border;
   }
+
   body {
     @apply bg-background text-foreground font-sans;
     font-feature-settings:
       'rlig' 1,
       'calt' 1;
   }
+
   h1,
   h2,
   h3,
   h4,
   h5,
   h6 {
-    @apply font-custom;
+    @apply font-custom scroll-mt-20;
   }
+
+  h1 {
+    @apply text-4xl font-bold;
+  }
+
+  h2 {
+    @apply text-3xl font-bold;
+  }
+
+  h3 {
+    @apply text-2xl font-bold;
+  }
+
+  h4 {
+    @apply text-xl font-bold;
+  }
+
+  h5 {
+    @apply text-lg font-bold;
+  }
+
+  h6 {
+    @apply text-base font-bold;
+  }
+
+  p {
+    @apply text-base;
+  }
+
+  a {
+    @apply text-primary hover:text-primary-foreground transition-colors;
+  }
+
+  code {
+    @apply font-mono text-sm bg-muted px-1 py-0.5 rounded;
+  }
+
+  pre {
+    @apply font-mono text-sm bg-muted p-4 rounded overflow-x-auto;
+  }
+
+  blockquote {
+    @apply border-l-4 border-primary pl-4 italic;
+  }
+
+  ul {
+    @apply list-disc pl-5;
+  }
+
+  ol {
+    @apply list-decimal pl-5;
+  }
+
+  li {
+    @apply mb-1;
+  }
+
+  table {
+    @apply w-full border-collapse;
+  }
+
+  th {
+    @apply bg-muted text-left p-2 border;
+  }
+
+  td {
+    @apply p-2 border;
+  }
+
+  img {
+    @apply max-w-full h-auto;
+  }
+
+  hr {
+    @apply border-t border-border my-4;
+  }
 }
This commit is contained in:
cojocaru-david
2025-05-01 01:40:16 +03:00
parent 3f96471c49
commit 0c90442415
424 changed files with 2517 additions and 36988 deletions


@@ -1,110 +0,0 @@
---
title: "10 ways to optimize your sql queries"
description: "Explore 10 ways to optimize your sql queries in this detailed guide, offering insights, strategies, and practical tips to enhance your understanding and application of the topic."
date: 2025-04-11
tags: ["ways", "optimize", "your", "queries"]
authors: ["Cojocaru David", "ChatGPT"]
---
# 10 Proven Ways to Supercharge Your SQL Query Performance
Is your database feeling sluggish? Slow SQL queries can cripple application performance, leading to frustrated users and wasted resources. But fear not! This guide reveals **10 proven ways to optimize your SQL queries**, transforming them from performance bottlenecks into streamlined data retrieval engines. We'll cover everything from smart indexing to advanced query restructuring, empowering you to write faster, more efficient database operations.
## 1. Master the Art of Indexing
Indexes are your database's secret weapon for rapid data retrieval, but they're not a magic bullet. Strategic indexing is key.
- **Index High-Cardinality Columns:** Prioritize columns frequently used in `WHERE`, `JOIN`, and `ORDER BY` clauses that contain a wide range of unique values.
- **Avoid Low-Selectivity Indexes:** Skip indexing columns with limited distinct values (e.g., boolean flags). These indexes often hinder more than they help.
- **Leverage Composite Indexes:** For queries involving multiple columns, create composite indexes that cover all relevant fields. This allows the database to retrieve the data directly from the index, avoiding a table lookup.
```sql
CREATE INDEX idx_customer_name ON customers(name);
```
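The composite-index advice above can be sketched the same way (table and column names here are hypothetical, not from the original post):

```sql
-- Composite index covering a common filter-plus-sort pattern.
-- A query that filters on status and sorts by created_at can be
-- satisfied by this index without a separate sort step.
CREATE INDEX idx_orders_status_created ON orders(status, created_at);

-- A query this index can serve:
-- SELECT id FROM orders WHERE status = 'shipped' ORDER BY created_at;
```

Column order matters in a composite index: the leading column should be the one your queries filter on most selectively.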
## 2. Fine-Tune Your WHERE Clauses
The `WHERE` clause is your primary filter. Optimizing it is crucial.
- **Prioritize Restrictive Conditions:** Place the most specific and limiting conditions at the beginning of your `WHERE` clause. This helps the database narrow down the result set quickly.
- **Avoid Functions on Indexed Columns:** Applying functions like `YEAR()` to indexed columns prevents the database from using the index effectively. Instead, rewrite your query to directly compare the indexed column with a range of values.
- **Embrace `BETWEEN` for Range Queries:** Replace multiple `OR` conditions with the `BETWEEN` operator for efficient range-based filtering.
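The two rewrites above look like this in practice (using a hypothetical `orders` table):

```sql
-- Avoid: the function call on order_date defeats an index on that column.
SELECT * FROM orders WHERE YEAR(order_date) = 2024;

-- Prefer: a sargable range comparison that can use the index directly.
SELECT * FROM orders
WHERE order_date >= '2024-01-01' AND order_date < '2025-01-01';

-- BETWEEN replaces a chain of OR conditions for a contiguous range.
SELECT * FROM orders WHERE quantity BETWEEN 10 AND 20;
```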
## 3. Practice Data Retrieval Minimalism
Fetching only the necessary data minimizes resource consumption and speeds up query execution.
- **Specify Columns in `SELECT` Statements:** Avoid the temptation of `SELECT *`. Explicitly list the columns you need to retrieve.
- **Implement Pagination with `LIMIT`:** When dealing with large result sets, use `LIMIT` to paginate the results and avoid overwhelming the application.
- **Use Standard SQL `FETCH FIRST N ROWS ONLY`:** For portability, prefer the SQL-standard `FETCH FIRST N ROWS ONLY` or the database-specific equivalent (`TOP` in SQL Server) to limit result sets.
```sql
SELECT id, name, email FROM users WHERE active = 1 LIMIT 100;
```
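The standard-SQL variant of the same pagination looks like this (exact syntax support varies by database):

```sql
-- SQL-standard equivalent of LIMIT, with offset-based pagination.
SELECT id, name, email
FROM users
WHERE active = 1
ORDER BY id
OFFSET 0 ROWS
FETCH FIRST 100 ROWS ONLY;
```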
## 4. Vanquish the SELECT N+1 Problem
The dreaded SELECT N+1 problem arises when your application executes a separate query for each row returned by the initial query. This is a performance killer.
- **Harness the Power of `JOIN`:** Instead of fetching related data in a loop, use `JOIN` clauses to retrieve all necessary data in a single, efficient query.
- **Utilize Batch Fetching/Eager Loading:** If you're using an ORM, explore features like batch fetching or eager loading to avoid the SELECT N+1 trap.
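A minimal sketch of the N+1 pattern and its single-query fix, assuming a hypothetical `users`/`orders` schema:

```sql
-- N+1 anti-pattern: one query for the users, then one query per user:
--   SELECT id FROM users WHERE active = 1;
--   SELECT * FROM orders WHERE user_id = ?;  -- repeated N times

-- Fix: fetch users and their orders in a single round trip.
SELECT u.id, u.name, o.id AS order_id, o.total
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.active = 1;
```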
## 5. Masterful JOIN Operations
Incorrectly structured JOIN operations can be major performance bottlenecks.
- **Prefer `INNER JOIN` Whenever Possible:** `INNER JOIN` is generally more performant than `OUTER JOIN`. Use it whenever you only need matching records.
- **Always Join on Indexed Columns:** Ensure that the columns used in your `JOIN` conditions are properly indexed.
- **Minimize Joined Tables:** Reduce the number of joined tables where possible to simplify the query and improve performance.
## 6. Decode the Secrets of Query Execution Plans
Database engines provide execution plans that reveal how they intend to execute your queries. Use them to identify inefficiencies.
- **Run `EXPLAIN` Before Executing:** Execute `EXPLAIN` (PostgreSQL/MySQL) or `EXPLAIN PLAN` (Oracle) before running your queries.
- **Analyze the Output:** Look for full table scans (indicating missing indexes), inefficient joins, and other performance-related issues in the execution plan.
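For example, in PostgreSQL or MySQL (the plan output format differs between engines):

```sql
-- Show the planner's strategy without running the query.
EXPLAIN
SELECT id, name FROM users WHERE email = 'a@example.com';

-- PostgreSQL can also execute the query and report actual timings:
EXPLAIN ANALYZE
SELECT id, name FROM users WHERE email = 'a@example.com';
```

A `Seq Scan` (PostgreSQL) or `type: ALL` (MySQL) on a large table is the classic sign of a missing index.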
## 7. Escape the Cursor Curse
Cursors process rows one at a time, leading to slow and inefficient operations.
- **Embrace Set-Based Operations:** Replace cursors with set-based operations like `UPDATE`, `INSERT`, and `DELETE` performed in bulk.
- **Employ Temporary Tables or CTEs:** For complex logic that might seem to require a cursor, explore the use of temporary tables or Common Table Expressions (CTEs).
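Both replacements can be sketched in a few lines (table names here are illustrative):

```sql
-- Instead of a cursor that loops over rows and updates them one at a
-- time, express the same logic as a single set-based statement.
UPDATE products
SET price = price * 1.10
WHERE category = 'books';

-- Multi-step logic that seems to need a cursor can often use a CTE:
WITH stale AS (
  SELECT id FROM sessions WHERE last_seen < '2025-01-01'
)
DELETE FROM sessions WHERE id IN (SELECT id FROM stale);
```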
## 8. Strike the Right Balance with Normalization
Database normalization reduces data redundancy but can increase join complexity.
- **Normalize for Write-Heavy Workloads:** In systems with frequent writes, normalization is crucial to maintain data integrity.
- **Denormalize Selectively for Read-Heavy Scenarios:** For read-intensive applications, consider denormalizing specific tables to reduce the need for joins and improve query performance. However, carefully consider the data integrity implications.
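One common selective-denormalization pattern is a maintained aggregate column, sketched here with hypothetical names:

```sql
-- Denormalized counter: storing order_count on users lets read-heavy
-- pages skip a COUNT(*) join, at the cost of keeping the value in
-- sync whenever orders change.
ALTER TABLE users ADD COLUMN order_count INT NOT NULL DEFAULT 0;

UPDATE users u
SET order_count = (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id);
```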
## 9. Unleash the Power of Stored Procedures
Stored procedures are precompiled SQL code stored within the database.
- **Precompile Frequently Used Queries:** Use stored procedures for frequently executed queries to reduce parsing overhead.
- **Minimize Network Round Trips:** Stored procedures reduce the number of network round trips between the application and the database server.
```sql
CREATE PROCEDURE GetActiveUsers
AS
BEGIN
    SELECT id, name FROM users WHERE active = 1;
END;
```
## 10. Embrace Continuous Monitoring and Tuning
Database performance is not a "set it and forget it" affair.
- **Log and Analyze Slow Queries:** Implement a system for logging slow-running queries. Regularly analyze these logs to identify areas for optimization.
- **Adapt Indexes to Changing Query Patterns:** As your application evolves, adjust your indexes to match changing query patterns.
- **Schedule Regular Database Maintenance:** Schedule routine database maintenance tasks, such as `ANALYZE` and `VACUUM` in PostgreSQL, to keep your database running smoothly.
## Conclusion
Optimizing SQL queries is a continuous journey, not a destination. By diligently applying these **10 proven ways to optimize your SQL queries**, you'll not only accelerate response times but also build a more scalable and resilient database system. Remember to regularly monitor performance and adapt your strategies to changing workloads.
> _"The most expensive query is the one you didn't know was slow."_ — Database Performance Wisdom
Start implementing these techniques today and unlock the full potential of your database!