chatAI4R

chatAI4R: Chat-Based Interactive Artificial Intelligence for R


Version 0.4.3 - Enhanced Security & Multi-LLM Capabilities

GitHub/chatAI4R

Description

chatAI4R (Chat-based Interactive Artificial Intelligence for R) is an R package designed to integrate the OpenAI API and other APIs for artificial intelligence (AI) applications. The package leverages large language model (LLM)-based AI techniques to enable efficient knowledge discovery and data analysis. chatAI4R provides basic R functions for working with LLMs, together with a set of R functions that support prompt creation. LLMs allow us to extend the world of R. Additionally, I strongly believe that LLMs are becoming so generalized that “Are you searching Google?” is likely to evolve into “Are you LLMing?”.

chatAI4R is an experimental project aimed at developing and implementing various LLM applications in R. Furthermore, the package is under continuous development with a focus on extending its capabilities for bioinformatics analysis.

About this project and future developments

The functionality for interlanguage translation using DeepL has been separated into the ‘deepRstudio’ package. Functions related to text-to-image generation have been separated into the ‘stableDiffusion4R’ package.

Installation of the chatAI4R package

1. Start R / RStudio console.

2. Run the following commands in the R console:

CRAN / development version installation

# CRAN-version installation
install.packages("chatAI4R")
library(chatAI4R)
# Dev-version installation
devtools::install_github("kumeS/chatAI4R")
library(chatAI4R)

# Release v0.2.3
devtools::install_github("kumeS/chatAI4R", ref = "v0.2.3")
library(chatAI4R)

Installation from source

# For macOS, installation from source
system("wget https://github.com/kumeS/chatAI4R/archive/refs/tags/v0.2.3.tar.gz")
# or system("wget https://github.com/kumeS/chatAI4R/archive/refs/tags/v0.2.3.tar.gz --no-check-certificate")
system("R CMD INSTALL v0.2.3.tar.gz")

3. Set the API keys for Multi-API Support

chatAI4R supports multiple AI APIs. Configure the APIs you want to use:

Required: OpenAI API (for most functions)

Register on the OpenAI website and obtain your API key.

# Set your OpenAI API key (required)
Sys.setenv(OPENAI_API_KEY = "sk-your-openai-api-key")
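To confirm that the key is visible to the current R session, you can check the environment variable with base R (a quick sanity check; it only verifies that the variable is non-empty, not that the key itself is valid):

# Check that the key is set (TRUE if a non-empty value is present)
nzchar(Sys.getenv("OPENAI_API_KEY"))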

Optional: Additional AI APIs (for extended functions)

# Google Gemini API (for gemini4R, geminiGrounding4R)
Sys.setenv(GoogleGemini_API_KEY = "your-gemini-api-key")

# Replicate API (for replicatellmAPI4R)
Sys.setenv(Replicate_API_KEY = "your-replicate-api-key")

# Dify API (for DifyChat4R)
Sys.setenv(DIFY_API_KEY = "your-dify-api-key")

# DeepL API (for discussion_flow functions with translation)
Sys.setenv(DeepL_API_KEY = "your-deepl-api-key")

# io.net API (for multiLLMviaionet functions)
Sys.setenv(IONET_API_KEY = "your-ionet-api-key")

Permanent Configuration

Create an .Rprofile file in your home directory and add your API keys:

# Create a file
file.create("~/.Rprofile") 

# Add all your API keys to the file
cat('
# chatAI4R API Keys Configuration
Sys.setenv(OPENAI_API_KEY = "sk-your-openai-api-key")
Sys.setenv(GoogleGemini_API_KEY = "your-gemini-api-key")
Sys.setenv(Replicate_API_KEY = "your-replicate-api-key")
Sys.setenv(DIFY_API_KEY = "your-dify-api-key")
Sys.setenv(DeepL_API_KEY = "your-deepl-api-key")
Sys.setenv(IONET_API_KEY = "your-ionet-api-key")
', file = "~/.Rprofile", append = TRUE)

# [macOS] Open the file and edit it
system("open ~/.Rprofile")

Note: Please be aware of newline character inconsistencies across different operating systems.
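After editing, restart R and confirm that the keys are loaded from ~/.Rprofile (a minimal check using base R only):

# Run in a fresh R session after restarting
file.exists("~/.Rprofile")            # the file is in place
nzchar(Sys.getenv("OPENAI_API_KEY"))  # TRUE if the key was loaded at startup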

Tutorial

Basic usage

Applied usage of the chatAI4R package

Prompts for ChatGPT / GPT-4

File Description Prompt
create_flowcharts A prompt to create a flowchart Prompt
create_roxygen2_v01 A prompt to create a roxygen2 description Prompt
create_roxygen2_v02 A prompt to create a roxygen2 description Prompt
edit_DESCRIPTION A prompt to edit DESCRIPTION Prompt
Img2txt_prompt_v01 A prompt to create an i2i prompt Prompt
Img2txt_prompt_v02 A prompt to create an i2i prompt Prompt

R functions

The chatAI4R package is structured into four layers of functions that provide increasingly sophisticated AI capabilities, from basic API access to expert-level data mining and analysis.

🟢 Core Functions (1st Layer)

Access to LLM API / Multi-APIs

Core functions provide direct access to multiple AI APIs, enabling basic AI operations.

Function Description API Service Script Flowchart
chat4R Chat with GPT models using OpenAI API (One-shot) OpenAI Script Flowchart
chat4R_history Use chat history for OpenAI’s GPT model OpenAI Script Flowchart
textEmbedding Text Embedding from OpenAI Embeddings API (1536-dimensional) OpenAI Script Flowchart
vision4R Advanced image analysis and interpretation OpenAI Script  
gemini4R Chat with Google Gemini AI models Google Gemini Script  
replicatellmAPI4R Access various LLM models through Replicate platform Replicate Script  
DifyChat4R Chat and completion endpoints through Dify platform Dify Script  
multiLLMviaionet Execute multiple LLM models simultaneously via io.net API io.net Script  
list_ionet_models List available LLM models on io.net platform io.net Script  
multiLLM_random10 Quick execution of 10 randomly selected models via io.net io.net Script  
multiLLM_random5 Quick execution of 5 randomly selected models via io.net io.net Script  
completions4R ⚠️ DEPRECATED - Generate text using OpenAI completions API (scheduled for removal) OpenAI Script Flowchart

Utility Functions (Non-API)

Function Description Script
slow_print_v2 Slowly print text with typewriter effect Script
ngsub Remove extra spaces and newline characters Script
removeQuotations Remove all types of quotations from text Script
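A short sketch of how these non-API helpers can be used (assuming each takes a character string as its first argument; see the individual help pages for the exact signatures):

# Clean up a string locally; no API calls are involved
txt <- "  'Hello,   world!'  \n  New line.  "
ngsub(txt)             # collapse extra spaces and newline characters
removeQuotations(txt)  # strip quotation marks from the text
slow_print_v2("Done")  # print text with a typewriter effect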

🟡 2nd Layered Functions (Usage/Task)

Execution of simple LLM tasks: Chat memory, translation, proofreading, etc.

These functions combine core APIs to perform specific tasks and maintain conversation context.

Function Description Script Flowchart
conversation4R Manage conversation with persistent history Script Flowchart
TextSummary Summarize long texts with intelligent chunking Script  
TextSummaryAsBullet Summarize selected text into bullet points Script  
revisedText Revision for scientific text Script  
proofreadEnglishText Proofread English text via RStudio API Script  
proofreadText Proofread text with grammar and style correction Script  
enrichTextContent Enrich text content with additional information Script  
convertBullet2Sentence Convert bullet points to sentences Script  

🟠 3rd Layered Functions (Workflow)

LLM Workflow, LLM Bots, R Packaging Supports

Advanced workflow functions that orchestrate multiple AI operations and support complex development tasks.

Function Description Script
discussion_flow_v1 Multi-agent expert system simulation (3 roles) Script
discussion_flow_v2 Enhanced multi-bot conversation system Script
createSpecifications4R Create detailed specifications for R functions Script
createRfunction Create R functions from selected text or clipboard Script
createRcode Generate R code from clipboard content Script
convertRscript2Function Convert R script to structured R function Script
addRoxygenDescription Add Roxygen documentation to R functions Script
OptimizeRcode Optimize and complete R code Script
designPackage Design complete R packages Script
addCommentCode Add intelligent comments to R code Script
checkErrorDet Analyze and explain R error messages Script
autocreateFunction4R UPDATED - Generate and improve R functions (now uses chat4R) Script
supportIdeaGeneration Support idea generation from text input Script

🔴 4th Layered Functions (Expertise)

Data mining & Advanced Analysis

Expert-level functions that provide sophisticated data analysis, pattern recognition, and knowledge extraction capabilities.

Function Description Script
interpretResult Interpret analysis results across 13 analytical domains Script
extractKeywords Extract key concepts and terms from complex text Script
convertScientificLiterature Convert text to scientific literature format Script
summaryWebScrapingText Web scraping with intelligent summarization Script
geminiGrounding4R Advanced AI with Google Search grounding Script
chatAI4pdf Intelligent PDF document analysis and summarization Script
textFileInput4ai Large-scale text file analysis with chunking Script
searchFunction Expert-level R function discovery and recommendation Script

Functions for RIKEN press release (future developments)

Simple usage

One-Shot Chatting

All calls to the chat4R function are one-shot chats: conversation history is not carried over to the next call.

#API: "https://api.openai.com/v1/chat/completions"
chat4R("Hello")

#⚠️ DEPRECATED: OpenAI completions API (scheduled for removal)
# completions4R("Hello")  # Use chat4R() instead

Few-Shots/Chain-Shots Chatting

Calls to the conversation4R function keep a history of the conversation. The number of previous messages kept in memory defaults to 2.

#First shot
conversation4R("Hello")

#Second shot
conversation4R("Hello")

Text Embedding

textEmbedding converts input text into a numeric vector. The text-embedding-ada-002 model returns a 1536-dimensional vector of floating-point values.

#Embedding
textEmbedding("Hello, world!")

🌟 NEW: Multi-LLM Execution via io.net (v0.4.3)

Execute multiple LLM models simultaneously and compare their responses; more than 23 cutting-edge models are available.

# Set io.net API key
Sys.setenv(IONET_API_KEY = "your-ionet-api-key")

# Basic multi-LLM execution with latest 2025 models
result <- multiLLMviaionet(
  prompt = "Explain quantum computing",
  models = c("deepseek-ai/DeepSeek-R1-0528",                    # Latest reasoning model
             "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8", # Llama 4 multimodal
             "Qwen/Qwen3-235B-A22B-FP8",                      # Latest Qwen3 MoE
             "mistralai/Magistral-Small-2506",                # Advanced multilingual
             "microsoft/phi-4")                               # Compact powerhouse
)

# 🎲 Quick random 10 model comparison (balanced across families)
result <- multiLLM_random10("What is artificial intelligence?")

# ⚡ Quick random 5 model comparison (for faster testing)
result <- multiLLM_random5("Write a Python function")

# 📋 Explore available models (23+ total as of 2025)
all_models <- list_ionet_models()
print(paste("Total models available:", length(all_models)))

# 🏷️ Browse by category
llama_models <- list_ionet_models("llama")        # Meta Llama series (3 models)
deepseek_models <- list_ionet_models("deepseek")  # DeepSeek reasoning (4 models)  
qwen_models <- list_ionet_models("qwen")          # Alibaba Qwen series (2 models)
mistral_models <- list_ionet_models("mistral")    # Mistral AI series (4 models)
compact_models <- list_ionet_models("compact")    # Efficient models (4 models)
reasoning_models <- list_ionet_models("reasoning") # Math/logic specialists (2 models)

# 📊 Detailed model information
detailed_info <- list_ionet_models(detailed = TRUE)
View(detailed_info)

# 🚀 Advanced usage with custom parameters
result <- multiLLMviaionet(
  prompt = "Design a machine learning pipeline for time series forecasting",
  max_models = 8,
  random_selection = TRUE,
  temperature = 0.3,        # More deterministic for technical tasks
  max_tokens = 2000,        # Longer responses
  streaming = FALSE,        # Wait for complete responses
  parallel = TRUE,          # True async execution
  verbose = TRUE           # Monitor progress
)

# Access comprehensive results
print(result$summary)                    # Execution statistics
lapply(result$results, function(x) {     # Individual model responses
  if(x$success) cat(x$model, ":", substr(x$response, 1, 200), "...\n\n")
})

🔥 Featured Models (2025):

📈 What’s New in v0.4.3 (January 2025)

🔒 Security Enhancements

🚀 New Multi-LLM Capabilities

🛠️ Developer Experience

📊 Function Categories

License

Copyright (c) 2025 Satoshi Kume. Released under the Artistic License 2.0.

Cite

Kume S. (2025) chatAI4R: Chat-based Interactive Artificial Intelligence for R. Version 0.4.3.

#BibTeX
@misc{Kume2025chatAI4R,
  title={chatAI4R: Chat-Based Interactive Artificial Intelligence for R},
  author={Kume, Satoshi}, 
  year={2025},
  version={0.4.3},
  publisher={GitHub}, 
  note={R Package with Multi-LLM Capabilities},
  howpublished={\url{https://github.com/kumeS/chatAI4R}},
}

Contributors