Integration of SEO into Next.js Pages and Articles
To integrate SEO into your pages and articles, you can leverage the `getMetadata` function from your `metadata.ts` file. This function allows you to generate SEO metadata dynamically, customized for each page or article. Here's how you can integrate and configure SEO:
Default Metadata Configuration
Use the `getMetadata` function to generate default metadata for your pages. This function uses sensible defaults from `mainConfig` and allows for partial overrides.
Example usage in a page component:
```tsx
import { getMetadata } from "@/lib/configuration/seo/metadata";

export const metadata = getMetadata();

export default function Page() {
  return (
    <div>
      <h1>Welcome to Our Page</h1>
      {/* Page content */}
    </div>
  );
}
```
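Under the hood, a helper like this typically merges page-level overrides onto the `mainConfig` defaults. The following is a minimal sketch of what `metadata.ts` might contain, not the actual implementation; the `mainConfig` import path and the exact merge behavior are assumptions:

```ts
import type { Metadata } from "next";
// Hypothetical import path for the project's central configuration.
import { mainConfig } from "@/lib/configuration/config";

type MetadataOverrides = Metadata & { canonicalUrlRelative?: string };

export function getMetadata(overrides: MetadataOverrides = {}): Metadata {
  const { canonicalUrlRelative, ...rest } = overrides;
  return {
    title: mainConfig.defaultTitle,
    description: mainConfig.defaultDescription,
    ...rest, // page-specific values win over the defaults
    openGraph: {
      siteName: mainConfig.appName,
      url: mainConfig.appUrl,
      ...rest.openGraph,
    },
    twitter: {
      creator: mainConfig.defaultTwitterHandle,
      ...rest.twitter,
    },
    // canonicalUrlRelative is documented below; mapping it onto
    // Next.js alternates.canonical is an assumption.
    ...(canonicalUrlRelative
      ? { alternates: { canonical: canonicalUrlRelative } }
      : {}),
  };
}
```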
Custom Metadata Configuration
You can customize the metadata for specific pages or articles by passing an object with the desired properties to `getMetadata`. For example:
```tsx
import { getMetadata } from "@/lib/configuration/seo/metadata";

export const metadata = getMetadata({
  title: "10 Productivity Hacks for Modern Professionals",
  description:
    "Discover cutting-edge strategies to maximize your work efficiency and achieve more in less time.",
  openGraph: {
    images: [
      {
        url: "/images/blog/productivity-tips.jpg",
        width: 1200,
        height: 630,
      },
    ],
  },
});

export default function BlogPost() {
  return (
    <article>
      <h1>10 Productivity Hacks for Modern Professionals</h1>
      {/* Blog post content */}
    </article>
  );
}
```
Configuring SEO
Default Configuration
- Title and Description: Set default values in `mainConfig` for `defaultTitle` and `defaultDescription` (see the sketch after this list).
- OpenGraph and Twitter: Use default values for OpenGraph and Twitter metadata, such as `appUrl`, `appName`, and `defaultTwitterHandle`.
- Robots and Sitemap: Configure default rules in `robots.ts` and `sitemapConfig.ts` to control crawler access and sitemap generation.
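To make these defaults concrete, here is a minimal sketch of the SEO-related fields in `mainConfig`. Only the field names are taken from this guide; the values and surrounding structure are placeholders:

```ts
// Hypothetical sketch: adjust names and values to your own mainConfig.
export const mainConfig = {
  appName: "MyApp",
  appUrl: process.env.NEXT_PUBLIC_APP_URL!,
  defaultTitle: "MyApp | Productivity Tools for Modern Teams",
  defaultDescription:
    "Streamline your business processes with our productivity tools.",
  defaultTwitterHandle: "@myapp",
};
```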
Custom Configuration
- Custom Titles and Descriptions: Override defaults by passing specific values to `getMetadata`.
- OpenGraph Customization: Customize OpenGraph properties such as title, description, and images for better social media sharing.
- Canonical URLs: Use `canonicalUrlRelative` to specify canonical URLs for pages and avoid duplicate-content issues.
- Extra Tags: Add any additional metadata tags using the `extraTags` parameter (see the note after the example below).
Example of Custom SEO Configuration
```tsx
import { getMetadata } from "@/lib/configuration/seo/metadata";

export const metadata = getMetadata({
  title: "Revolutionize Your Workflow | Our SaaS Solution",
  description:
    "Streamline your business processes with our cutting-edge productivity tools.",
  canonicalUrlRelative: "/landing",
  openGraph: {
    images: [
      {
        url: "/custom-landing-image.jpg",
        width: 1200,
        height: 630,
      },
    ],
  },
});
```
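The example above does not use `extraTags`. As a hedged sketch, assuming the parameter accepts a plain record of meta tag names and values (verify the exact shape in your `metadata.ts`):

```tsx
// Assumption: extraTags maps meta tag names to content strings.
export const metadata = getMetadata({
  title: "Revolutionize Your Workflow | Our SaaS Solution",
  extraTags: {
    "theme-color": "#0f172a",
    "format-detection": "telephone=no",
  },
});
```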
Sitemap Configuration
The sitemap configuration is managed in `sitemapConfig.ts`. This file defines the structure and inclusion of various routes in your sitemap.
Configuration Options
- Default Domain: Set `defaultDomain` to your application's base URL.
- Static Routes: Define static routes that should always be included in the sitemap.
- Dynamic Sections: Configure dynamic sections such as blog posts, categories, authors, and the dashboard with an `include` flag to toggle their inclusion.
Example Configuration
```ts
export const sitemapConfig: SitemapConfiguration = {
  defaultDomain: process.env.NEXT_PUBLIC_APP_URL!,
  staticRoutes: [
    { path: "/", changeFrequency: "monthly" },
    { path: "/blog", changeFrequency: "weekly", priority: "0.8" },
  ],
  blogPosts: {
    include: true,
    url: "/blog",
  },
  categories: {
    include: true,
    url: "/blog/categories",
  },
  authors: {
    include: true,
    url: "/blog/authors",
  },
  dashboard: {
    include: true,
    url: "/dashboard",
  },
};
```
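Next.js builds the sitemap from an `app/sitemap.ts` route file. If `sitemapConfig.ts` feeds such a file rather than being the route itself, the consuming side might look like the sketch below; the import path and mapping logic are assumptions:

```ts
import type { MetadataRoute } from "next";
// Hypothetical import path; adjust to where sitemapConfig.ts lives.
import { sitemapConfig } from "@/lib/configuration/seo/sitemapConfig";

export default function sitemap(): MetadataRoute.Sitemap {
  // Map each configured static route onto a sitemap entry. Dynamic
  // sections (blogPosts, categories, authors, dashboard) would be
  // appended here when their `include` flag is true, using your own
  // data loaders to enumerate the concrete URLs.
  return sitemapConfig.staticRoutes.map((route) => ({
    url: `${sitemapConfig.defaultDomain}${route.path}`,
    changeFrequency: route.changeFrequency,
    priority: route.priority ? Number(route.priority) : undefined,
    lastModified: new Date(),
  }));
}
```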
Robots Configuration
The robots configuration is managed in `robots.ts`. This file defines the rules for web crawlers, specifying which parts of your site can be accessed or restricted.
Configuration Options
- User Agent Rules: Define rules for different user agents (e.g., `*` for all crawlers, `Googlebot` for Google's crawler).
- Allow and Disallow Paths: Specify paths that are allowed or disallowed for crawling to protect sensitive areas of your site.
- Sitemap URL: Provide the full URL to your sitemap to help search engines index your pages efficiently.
Example Configuration
```ts
export const robotRulesConfig = {
  rules: [
    {
      userAgent: "*",
      allow: ["/"],
      disallow: ["/admin", "/private"],
    },
  ],
  sitemap: process.env.NEXT_PUBLIC_APP_URL! + "/sitemap.xml",
};
```
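Next.js serves `/robots.txt` from an `app/robots.ts` route file. If your `robots.ts` does not already export the route handler itself, wiring `robotRulesConfig` in might look like this sketch; the import path is hypothetical:

```ts
import type { MetadataRoute } from "next";
// Hypothetical import path; adjust to where the config is defined.
import { robotRulesConfig } from "@/lib/configuration/seo/robots";

export default function robots(): MetadataRoute.Robots {
  // Next.js serializes this object into the robots.txt response.
  return {
    rules: robotRulesConfig.rules,
    sitemap: robotRulesConfig.sitemap,
  };
}
```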
Best Practices
- Consistency: Ensure metadata is consistent across different pages and platforms.
- Relevance: Use relevant keywords and descriptions to improve search engine visibility.
- Validation: Test your structured data and metadata using tools like Google's Rich Results Test to ensure correctness.