1 Commit

Author: an-lee · SHA1: ef1af39f05 · Message: bump v0.1.0-alpha · Date: 2024-01-11 13:07:38 +08:00
114 changed files with 937 additions and 4106 deletions

View File

@@ -18,7 +18,3 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
run: yarn publish:enjoy
- if: matrix.os == 'macos-latest'
env:
GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
run: yarn run publish --arch=arm64

View File

@@ -17,8 +17,6 @@
- [Enjoy App](./enjoy/README.md)
## * 开发者
### 本地启动
```bash
@@ -31,88 +29,3 @@ yarn start:enjoy
```bash
yarn make:enjoy
```
## * 普通小白用户
方法一:这是**最直接简单的方法**是去 [releases 页面](https://github.com/xiaolai/everyone-can-use-english/tags)下载相应的安装文件。
方法二:如果想要随时**试用更新版本**的话,请按一下步骤操作。
### MacOS 用户
1. 打开命令行工具 Terminal
2. 安装 Homebrew请参阅这篇文章《[从 Terminal 开始…](https://github.com/xiaolai/apple-computer-literacy/blob/main/start-from-terminal.md)》)
3. 安装 yarn
```bash
brew install yarn
```
4. 克隆此仓库至本地,而后安装、启动:
```bash
cd ~
mkdir github
cd github
git clone https://github.com/xiaolai/everyone-can-use-english
cd everyone-can-use-english
yarn install
yarn start:enjoy
```
### Windows 用户
系统要求Windows 10 22H2 以上版本、 [Windows PowerShell 5.1](https://aka.ms/wmf5download) 以上版本、互联网网络连接正常。
1. 将鼠标移至任务栏的 “Windows 徽标” 上单击右键,选择 “PowerShell”
> tips 1 :在最新的 Windows 11 上,你看不到 “PowerShell” 选项,只有 “终端”
>
> tips 2 :不能用管理员权限运行 PowerShell ,否则会导致 Scoop 安装失败
2. 在弹出的 PowerShell 窗口中依次执行运行以下命令安装Scoop
```powershell
# 设置 PowerShell 执行策略
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# 下载安装脚本
irm get.scoop.sh -outfile 'install.ps1'
# 执行安装, --ScoopDir 参数指定 Scoop 安装路径
.\install.ps1 -ScoopDir 'C:\Scoop'
```
如果出现下面的错误:
> <span style="color:red">irm : 未能解析此远程名称: 'raw.githubusercontent.com'</span>
说明你的**网络连接**有问题,请自行研究解决:
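如果想确认失败原因确实是 DNS / 网络连通性问题而不是 Scoop 本身的问题,可以先探测一下安装脚本所在的主机。这是补充的排查建议,并非原步骤的一部分;`Test-NetConnection` 在 Windows 8 及以后的系统中可用 (a suggested diagnostic, not part of the original instructions):

```powershell
# 检查安装脚本所在主机的 DNS 解析与 HTTPS 连通性
Test-NetConnection raw.githubusercontent.com -Port 443
```

如果输出中 `TcpTestSucceeded` 为 `False`,请先解决网络(DNS、代理或防火墙)问题,再重新运行 Scoop 安装脚本。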
3. 安装 Nodejs 和 yarn 以及其他依赖环境
```powershell
scoop install nodejs
scoop install git
npm install yarn -D
```
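注意:`npm install yarn -D` 只会把 yarn 安装到当前目录的 `node_modules` 中,之后 `yarn` 命令可能仍不在 PATH 里。如果下一步出现类似 "yarn 不是内部或外部命令" 的报错,可以改用全局安装(这是补充建议,并非原步骤的一部分,a suggestion rather than part of the original steps):

```powershell
# 方式一:通过 npm 全局安装 yarn
npm install -g yarn
# 方式二:通过 Scoop 安装 yarn
scoop install yarn
```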
4. 克隆此仓库至本地,而后安装 Enjoy APP
```powershell
cd ~
mkdir github
cd github
git clone https://github.com/xiaolai/everyone-can-use-english
cd everyone-can-use-english
cd enjoy
yarn install
yarn start:enjoy
```
出现 `Completed in XXXXXXXXXX` 类似字样说明安装成功!
5. 运行 Enjoy APP ,在终端执行下列命令:
```powershell
yarn start:enjoy
```
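由于你是从克隆的仓库运行的,之后想体验更新版本时只需拉取最新代码并重新安装依赖即可。下面是一个简单示意,假设仓库如第 4 步那样克隆在 `~/github` 下(请在你第一次安装成功的那个目录中执行,a minimal sketch under that assumption):

```powershell
cd ~/github/everyone-can-use-english
git pull            # 拉取最新源码
yarn install        # 更新依赖
yarn start:enjoy    # 启动更新后的应用
```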

View File

@@ -395,7 +395,7 @@ One of the reasons why many parents want to send their children to separate scho
* 名词单复数形式错误:错误地使用了名词的数,包括:使用了不可数名词的 “复数” 形式,使用了集合名词的 “复数” 形式,在应该使用复数的地方使用了单数名词(或反之)等。
* 单数可数名词未受限定:句子中出现的单数可数名词之前没有使用限定词,包括冠词、不定代词、指示代词、名词或代词所有格、数词与某些形容词性的物主代词。
* 词性错误:在选择词汇的过程中忽略了英文词性的特性,仅按照含义来使用词汇,从而发生了词性使用错误的现象。
* 修饰关系错误:违反了词汇修饰的规则,采用了不恰当的修饰关系。包括用形容词修饰动词、形容词修饰形容词,副词修饰名词等。
* 搭配错误:句子中出现了不合适的词汇修饰、限制、说明现象,或者错误地使用了固有的词汇搭配形式。
* 词序错误:未使用正确的、符合习惯的表述语序来对内容进行陈述。其中包括修饰词顺序错误,该倒装时没有倒装等。
* 非谓语动词使用错误:错误地使用了现在分词、过去分词、或动词的不定式。其中包括:
@@ -445,9 +445,9 @@ Style: Toward Clarity and Grace by Joseph M. Williams
几乎所有真正有效的学习手段都是简单、廉价、往往并不直接但却真正有效的。复述,就是这样的有效手段。
每个文化中的每个人在这方面都一样 —— 终其一生绝大多数情况下都在复述别人说过的话。首先语言文字很难纯粹 “原创”,其次绝大多数情况下确实也没有必要 “独一无二”。更为重要的是,第二语言学习者的目标绝大多数情况下不是为了从事诗人、小说家之类的职业,而是希望多掌握一门用来承载信息沟通交流的工具 —— 这种情况下 “复述” 几乎占据了第二语言应用的全部。
这还真的并不是那么 “显而易见” 的事实。ETS 在设计并举办 TOEFL 考试几十年之后才 “恍然大悟” 地在新托福考试中大面积添加了 “复述能力” 的考量TOEFL 作文部分中有综合测试,要求考生先读一篇文章,然后再听一篇与刚刚读过的文章相关的讲座,而后复述讲座内容以及讲座内容是如何与阅读文章内容相联系的;口语部分中有先听再说,先读再说,听与读之后再说 —— 无一不是在考量考生的 “复述能力”。
## 9. 貌似多余:其实连哑巴英语都并不那么坏

View File

@@ -7,7 +7,7 @@
"config": "tailwind.config.js",
"css": "src/index.css",
"baseColor": "zinc",
"cssVariables": false
"cssVariables": true
},
"aliases": {
"components": "src/renderer/components",

View File

@@ -2,7 +2,6 @@ import type { ForgeConfig } from "@electron-forge/shared-types";
import { MakerSquirrel } from "@electron-forge/maker-squirrel";
import { MakerZIP } from "@electron-forge/maker-zip";
import { MakerDeb } from "@electron-forge/maker-deb";
import { MakerRpm } from "@electron-forge/maker-rpm";
import { VitePlugin } from "@electron-forge/plugin-vite";
import { dirname } from "node:path";
import { Walker, DepType, type Module } from "flora-colossus";
@@ -45,22 +44,14 @@ const config: ForgeConfig = {
mimeType: ["x-scheme-handler/enjoy"],
},
}),
new MakerRpm({
options: {
name: "enjoy",
productName: "Enjoy",
icon: "./assets/icon.png",
mimeType: ["x-scheme-handler/enjoy"],
},
}),
],
publishers: [
{
name: "@electron-forge/publisher-github",
config: {
repository: {
owner: "xiaolai",
name: "everyone-can-use-english",
owner: "an-lee",
name: "enjoy",
},
draft: true,
},

View File

@@ -2,7 +2,7 @@
"private": true,
"name": "enjoy",
"productName": "Enjoy",
"version": "0.1.0-alpha.3",
"version": "0.1.0-alpha",
"description": "Enjoy desktop app",
"main": ".vite/build/main.js",
"types": "./src/types.d.ts",
@@ -117,7 +117,6 @@
"lucide-react": "^0.308.0",
"mark.js": "^8.11.1",
"microsoft-cognitiveservices-speech-sdk": "^1.34.0",
"next-themes": "^0.2.1",
"openai": "^4.24.1",
"pitchfinder": "^2.3.2",
"postcss": "^8.4.33",
@@ -134,7 +133,6 @@
"rimraf": "^5.0.5",
"sequelize": "^6.35.2",
"sequelize-typescript": "^2.1.6",
"sonner": "^1.3.1",
"sqlite3": "^5.1.7",
"tailwind-scrollbar-hide": "^1.1.7",
"umzug": "^3.5.0",

View File

@@ -1,259 +0,0 @@
import axios, { AxiosInstance } from "axios";
import decamelizeKeys from "decamelize-keys";
import camelcaseKeys from "camelcase-keys";
const ONE_MINUTE = 1000 * 60; // 1 minute
export class Client {
public api: AxiosInstance;
public baseUrl: string;
public logger: any;
constructor(options: {
baseUrl: string;
accessToken?: string;
logger?: any;
}) {
const { baseUrl, accessToken, logger } = options;
this.baseUrl = baseUrl;
this.logger = logger || console;
this.api = axios.create({
baseURL: baseUrl,
timeout: ONE_MINUTE,
headers: {
"Content-Type": "application/json",
},
});
this.api.interceptors.request.use((config) => {
config.headers.Authorization = `Bearer ${accessToken}`;
this.logger.debug(
config.method.toUpperCase(),
config.baseURL + config.url,
config.data,
config.params
);
return config;
});
this.api.interceptors.response.use(
(response) => {
this.logger.debug(
response.status,
response.config.method.toUpperCase(),
response.config.baseURL + response.config.url
);
return camelcaseKeys(response.data, { deep: true });
},
(err) => {
if (err.response) {
this.logger.error(
err.response.status,
err.response.config.method.toUpperCase(),
err.response.config.baseURL + err.response.config.url
);
this.logger.error(err.response.data);
return Promise.reject(err.response.data);
}
if (err.request) {
this.logger.error(err.request);
} else {
this.logger.error(err.message);
}
return Promise.reject(err);
}
);
}
auth(params: { provider: string; code: string }): Promise<UserType> {
return this.api.post("/api/sessions", decamelizeKeys(params));
}
me(): Promise<UserType> {
return this.api.get("/api/me");
}
rankings(range: "day" | "week" | "month" | "year" | "all" = "day"): Promise<{
rankings: UserType[];
range: string;
}> {
return this.api.get("/api/users/rankings", { params: { range } });
}
posts(params?: { page?: number; items?: number }): Promise<
{
posts: PostType[];
} & PagyResponseType
> {
return this.api.get("/api/posts", { params: decamelizeKeys(params) });
}
post(id: string): Promise<PostType> {
return this.api.get(`/api/posts/${id}`);
}
createPost(params: {
metadata?: PostType["metadata"];
targetType?: string;
targetId?: string;
}): Promise<PostType> {
return this.api.post("/api/posts", decamelizeKeys(params));
}
updatePost(id: string, params: { content: string }): Promise<PostType> {
return this.api.put(`/api/posts/${id}`, decamelizeKeys(params));
}
deletePost(id: string): Promise<void> {
return this.api.delete(`/api/posts/${id}`);
}
transcriptions(params?: {
page?: number;
items?: number;
targetId?: string;
targetType?: string;
targetMd5?: string;
}): Promise<
{
transcriptions: TranscriptionType[];
} & PagyResponseType
> {
return this.api.get("/api/transcriptions", {
params: decamelizeKeys(params),
});
}
syncAudio(audio: Partial<AudioType>) {
return this.api.post("/api/mine/audios", decamelizeKeys(audio));
}
syncVideo(video: Partial<VideoType>) {
return this.api.post("/api/mine/videos", decamelizeKeys(video));
}
syncTranscription(transcription: Partial<TranscriptionType>) {
return this.api.post("/api/transcriptions", decamelizeKeys(transcription));
}
syncRecording(recording: Partial<RecordingType>) {
if (!recording) return;
return this.api.post("/api/mine/recordings", decamelizeKeys(recording));
}
generateSpeechToken(): Promise<{ token: string; region: string }> {
return this.api.post("/api/speech/tokens");
}
syncPronunciationAssessment(
pronunciationAssessment: Partial<PronunciationAssessmentType>
) {
if (!pronunciationAssessment) return;
return this.api.post(
"/api/mine/pronunciation_assessments",
decamelizeKeys(pronunciationAssessment)
);
}
recordingAssessment(id: string) {
return this.api.get(`/api/mine/recordings/${id}/assessment`);
}
lookup(params: {
word: string;
context: string;
sourceId?: string;
sourceType?: string;
}): Promise<LookupType> {
return this.api.post("/api/lookups", decamelizeKeys(params));
}
lookupInBatch(
lookups: {
word: string;
context: string;
sourceId?: string;
sourceType?: string;
}[]
): Promise<{ successCount: number; errors: string[]; total: number }> {
return this.api.post("/api/lookups/batch", {
lookups: decamelizeKeys(lookups, { deep: true }),
});
}
extractVocabularyFromStory(storyId: string): Promise<string[]> {
return this.api.post(`/api/stories/${storyId}/extract_vocabulary`);
}
storyMeanings(
storyId: string,
params?: {
page?: number;
items?: number;
storyId?: string;
}
): Promise<
{
meanings: MeaningType[];
pendingLookups?: LookupType[];
} & PagyResponseType
> {
return this.api.get(`/api/stories/${storyId}/meanings`, {
params: decamelizeKeys(params),
});
}
mineMeanings(params?: {
page?: number;
items?: number;
sourceId?: string;
sourceType?: string;
status?: string;
}): Promise<
{
meanings: MeaningType[];
} & PagyResponseType
> {
return this.api.get("/api/mine/meanings", {
params: decamelizeKeys(params),
});
}
createStory(params: CreateStoryParamsType): Promise<StoryType> {
return this.api.post("/api/stories", decamelizeKeys(params));
}
story(id: string): Promise<StoryType> {
return this.api.get(`/api/stories/${id}`);
}
stories(params?: { page: number }): Promise<
{
stories: StoryType[];
} & PagyResponseType
> {
return this.api.get("/api/stories", { params: decamelizeKeys(params) });
}
mineStories(params?: { page: number }): Promise<
{
stories: StoryType[];
} & PagyResponseType
> {
return this.api.get("/api/mine/stories", {
params: decamelizeKeys(params),
});
}
starStory(storyId: string): Promise<{ starred: boolean }> {
return this.api.post(`/api/mine/stories`, decamelizeKeys({ storyId }));
}
unstarStory(storyId: string): Promise<{ starred: boolean }> {
return this.api.delete(`/api/mine/stories/${storyId}`);
}
}

View File

@@ -1 +0,0 @@
export * from "./client";

View File

@@ -32,9 +32,9 @@ export const WHISPER_MODELS_OPTIONS = [
},
{
type: "large",
name: "ggml-large-v3.bin",
name: "ggml-large.bin",
size: "3.09 GB",
url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3.bin",
url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large.bin",
},
];
@@ -46,3 +46,35 @@ export const PROCESS_TIMEOUT = 1000 * 60 * 15;
export const AI_GATEWAY_ENDPOINT =
"https://gateway.ai.cloudflare.com/v1/11d43ab275eb7e1b271ba4089ecc3864/enjoy";
export const CONVERSATION_PRESET_SCENARIOS: {
scenario: string;
autoSpeech: boolean;
prompt: string;
}[] = [
{
scenario: "translation",
autoSpeech: false,
prompt: `Act as a translation machine that converts any language input I provide into fluent, idiomatic American English. If the input is already in English, refine it to sound like native American English.
Suggestions:
Ensure that the translation maintains the original meaning and tone of the input as much as possible.
In case of English inputs, focus on enhancing clarity, grammar, and style to match American English standards.
Return the translation only, no other words needed.
`,
},
{
scenario: "vocal_coach",
autoSpeech: true,
prompt: `As an AI English vocal coach with an American accent, engage in a conversation with me to help improve my spoken English skills. Use the appropriate tone and expressions that a native American English speaker would use, keeping in mind that your responses will be converted to audio.
Suggestions:
Use common American idioms and phrases to give a more authentic experience of American English.
Provide corrections and suggestions for improvement in a supportive and encouraging manner.
Use a variety of sentence structures and vocabulary to expose me to different aspects of the language.`,
},
];

View File

@@ -86,8 +86,7 @@
"ttsVoice": "TTS voice",
"ttsBaseUrl": "TTS base URL",
"notFound": "Conversation not found",
"contentRequired": "Content required",
"failedToGenerateResponse": "Failed to generate response"
"contentRequired": "Content required"
},
"pronunciationAssessment": {
"pronunciationScore": "Pronunciation Score",
@@ -123,7 +122,6 @@
},
"sidebar": {
"home": "Home",
"community": "Community",
"audios": "Audios",
"videos": "Videos",
"stories": "Stories",
@@ -190,27 +188,13 @@
"AIModel": "AI Model",
"chooseAIModelToDownload": "Choose AI Model to download",
"ffmpegCheck": "FFmpeg Check",
"check": "Check",
"ffmpegCommandIsWorking": "FFmpeg command is working",
"ffmpegCommandIsNotWorking": "FFmpeg command is not working",
"scan": "Scan",
"checkIfFfmpegIsInstalled": "Check if FFmpeg is installed",
"ffmpegFoundAt": "FFmpeg found at {{path}}",
"ffmpegNotFound": "FFmpeg not found",
"ffmpegInstallSteps": "FFmpeg Install Steps",
"Install": "Install",
"runTheFollowingCommandInTerminal": "Run the following command in terminal",
"click": "Click",
"willAutomaticallyFindFFmpeg": "Enjoy will automatically find FFmpeg",
"tryingToFindValidFFmepgInTheseDirectories": "Trying to find valid FFmpeg in these directories: {{dirs}}",
"invalidFfmpegPath": "Invalid FFmpeg path",
"usingInstalledFFmpeg": "Using installed FFmpeg",
"usingDownloadedFFmpeg": "Using downloaded FFmpeg",
"ffmpegInstalled": "FFmpeg is installed",
"ffmpegNotInstalled": "FFmpeg is not installed.",
"downloadFfmpeg": "Download FFmpeg",
"youAreReadyToGo": "You are ready to go",
"welcomeBack": "Welcome back! {{name}}",
"download": "Download",
"downloading": "Downloading {{file}}",
"chooseAIModelDependingOnYourHardware": "Choose AI Model depending on your hardware",
"areYouSureToDownload": "Are you sure to download {{name}}?",
"yourModelsWillBeDownloadedTo": "Your models will be downloaded to {{path}}",
@@ -219,10 +203,7 @@
"reset": "Reset",
"resetAll": "Reset All",
"resetAllConfirmation": "It will remove all of your personal data, are you sure?",
"resetSettings": "Reset Settings",
"resetSettingsConfirmation": "It will reset all of your settings, are you sure? The library will not be affected.",
"logoutAndRemoveAllPersonalData": "Logout and remove all personal data",
"logoutAndRemoveAllPersonalSettings": "Logout and remove all personal settings",
"hotkeys": "Hotkeys",
"quitApp": "Quit APP",
"openPreferences": "Open preferences",
@@ -256,7 +237,7 @@
"recentlyAdded": "recently added",
"recommended": "recommended",
"resourcesRecommendedByEnjoy": "resources recommended by Enjoy Bot",
"fromCommunity": "from community",
"fromCommunity": "from commnuity",
"videoResources": "video resources",
"audioResources": "audio resources",
"seeMore": "see more",
@@ -281,12 +262,8 @@
"recordingActivity": "recording activity",
"recordingDetail": "Recording detail",
"noRecordingActivities": "no recording activities",
"basicSettingsShort": "Basic",
"basicSettings": "Basic settings",
"advancedSettingsShort": "Advanced",
"advancedSettings": "Advanced settings",
"advanced": "Advanced",
"language": "Language",
"basicSettings": "basic",
"advancedSettings": "advanced",
"sttAiModel": "STT AI model",
"relaunchIsNeededAfterChanged": "Relaunch is needed after changed",
"openaiKeySaved": "OpenAI key saved",
@@ -321,7 +298,7 @@
"score": "score",
"inputUrlToStartReading": "Input url to start reading",
"read": "read",
"addStory": "add story",
"add_story": "add story",
"context": "context",
"keyVocabulary": "key vocabulary",
"addedStories": "added stories",
@@ -343,44 +320,5 @@
"presenter": "presenter",
"downloadAudio": "Download audio",
"downloadVideo": "Download video",
"recordTooShort": "Record too short",
"rankings": "Rankings",
"dayRankings": "Day rankings",
"weekRankings": "Week rankings",
"monthRankings": "Month rankings",
"allRankings": "All time rankings",
"noOneHasRecordedYet": "No one has recorded yet",
"activities": "Activities",
"square": "Square",
"noOneSharedYet": "No one shared yet",
"sharedSuccessfully": "Shared successfully",
"shareFailed": "Share failed",
"shareAudio": "Share audio",
"sharedAudio": "Shared an audio resource",
"areYouSureToShareThisAudioToCommunity": "Are you sure to share this audio to community?",
"shareVideo": "Share video",
"sharedVideo": "Shared a video resource",
"cannotShareLocalVideo": "Cannot share local video",
"areYouSureToShareThisVideoToCommunity": "Are you sure to share this video to community?",
"sharePrompt": "Share prompt",
"sharedPrompt": "Shared a prompt",
"areYouSureToShareThisPromptToCommunity": "Are you sure to share this prompt to community?",
"shareRecording": "Share recording",
"sharedRecording": "Shared a recording",
"areYouSureToShareThisRecordingToCommunity": "Are you sure to share this recording to community?",
"shareStory": "Share story",
"sharedStory": "Shared a story",
"areYouSureToShareThisStoryToCommunity": "Are you sure to share this story to community?",
"addToLibary": "Add to library",
"areYouSureToAddThisVideoToYourLibrary": "Are you sure to add this video to library?",
"areYouSureToAddThisAudioToYourLibrary": "Are you sure to add this audio to library?",
"audioAlreadyAddedToLibrary": "Audio already added to library",
"videoAlreadyAddedToLibrary": "Video already added to library",
"audioSuccessfullyAddedToLibrary": "Audio successfully added to library",
"videoSuccessfullyAddedToLibrary": "Video successfully added to library",
"sendToAIAssistant": "Send to AI assistant",
"removeSharing": "Remove sharing",
"areYouSureToRemoveThisSharing": "Are you sure to remove this sharing?",
"removeSharingSuccessfully": "Remove sharing successfully",
"removeSharingFailed": "Remove sharing failed"
"recordTooShort": "Record too short"
}

View File

@@ -86,8 +86,7 @@
"ttsVoice": "TTS 声音",
"ttsBaseUrl": "TTS 请求地址",
"notFound": "未找到对话",
"contentRequired": "对话内容不能为空",
"failedToGenerateResponse": "生成失败"
"contentRequired": "对话内容不能为空"
},
"pronunciationAssessment": {
"pronunciationScore": "发音得分",
@@ -123,7 +122,6 @@
},
"sidebar": {
"home": "主页",
"community": "社区",
"audios": "音频",
"videos": "视频",
"stories": "文章",
@@ -190,27 +188,13 @@
"AIModel": "AI 模型",
"chooseAIModelToDownload": "选择 AI 模型下载",
"ffmpegCheck": "FFmpeg 检查",
"check": "检查",
"ffmpegCommandIsWorking": "FFmpeg 命令正常工作",
"ffmpegCommandIsNotWorking": "FFmpeg 命令无法正常工作",
"scan": "查找",
"checkIfFfmpegIsInstalled": "检查 FFmpeg 是否已正确安装",
"ffmpegFoundAt": "检测到 FFmpeg 命令: {{path}}",
"ffmpegNotFound": "未检测到可用的 FFmpeg 命令",
"ffmpegInstallSteps": "FFmpeg 安装步骤",
"Install": "安装",
"runTheFollowingCommandInTerminal": "在终端中运行以下命令",
"click": "点击",
"willAutomaticallyFindFFmpeg": "Enjoy 将自动检测 FFmpeg 命令",
"tryingToFindValidFFmepgInTheseDirectories": "正在尝试在以下目录中查找有效的 FFmpeg 命令: {{dirs}}",
"invalidFfmpegPath": "无效的 FFmpeg 路径",
"usingInstalledFFmpeg": "使用已安装的 FFmpeg",
"usingDownloadedFFmpeg": "使用下载的 FFmpeg",
"ffmpegInstalled": "FFmpeg 已经安装",
"ffmpegNotInstalled": "FFmpeg 未安装,软件部分功能依赖于 FFmpeg",
"downloadFfmpeg": "下载 FFmpeg",
"youAreReadyToGo": "您已准备就绪",
"welcomeBack": "欢迎回来, {{name}}",
"download": "下载",
"downloading": "正在下载 {{file}}",
"chooseAIModelDependingOnYourHardware": "根据您的硬件选择合适的 AI 模型",
"areYouSureToDownload": "您确定要下载 {{name}} 吗?",
"yourModelsWillBeDownloadedTo": "您的模型将下载到目录 {{path}}",
@@ -219,10 +203,7 @@
"reset": "重置",
"resetAll": "重置所有",
"resetAllConfirmation": "这将删除您的所有个人数据, 您确定要重置吗?",
"resetSettings": "重置设置选项",
"resetSettingsConfirmation": "您确定要重置个人设置选项吗?资料库不会受影响。",
"logoutAndRemoveAllPersonalData": "退出登录并删除所有个人数据",
"logoutAndRemoveAllPersonalSettings": "退出登录并删除所有个人设置选项",
"hotkeys": "快捷键",
"quitApp": "退出应用",
"openPreferences": "打开设置",
@@ -281,11 +262,8 @@
"recordingActivity": "练习活动",
"recordingDetail": "录音详情",
"noRecordingActivities": "没有练习活动",
"basicSettingsShort": "基本设置",
"basicSettings": "基本设置",
"advancedSettingsShort": "高级设置",
"advancedSettings": "高级设置",
"language": "语言",
"sttAiModel": "语音转文本 AI 模型",
"relaunchIsNeededAfterChanged": "更改后需要重新启动",
"openaiKeySaved": "OpenAI 密钥已保存",
@@ -320,7 +298,7 @@
"score": "得分",
"inputUrlToStartReading": "输入 URL 开始阅读",
"read": "阅读",
"addStory": "添加文章",
"add_story": "添加文章",
"context": "原文",
"keyVocabulary": "关键词汇",
"addedStories": "添加的文章",
@@ -342,44 +320,5 @@
"presenter": "讲者",
"downloadAudio": "下载音频",
"downloadVideo": "下载视频",
"recordTooShort": "录音时长太短",
"rankings": "排行榜",
"dayRankings": "日排行榜",
"weekRankings": "周排行榜",
"monthRankings": "月排行榜",
"allRankings": "总排行榜",
"noOneHasRecordedYet": "还没有人练习",
"activities": "动态",
"square": "广场",
"noOneSharedYet": "还没有人分享",
"sharedSuccessfully": "分享成功",
"sharedFailed": "分享失败",
"shareAudio": "分享音频",
"sharedAudio": "分享了一个音频材料",
"areYouSureToShareThisAudioToCommunity": "您确定要分享此音频到社区吗?",
"shareVideo": "分享视频",
"sharedVideo": "分享了一个视频材料",
"cannotShareLocalVideo": "无法分享本地视频",
"areYouSureToShareThisVideoToCommunity": "您确定要分享此视频到社区吗?",
"sharePrompt": "分享提示语",
"sharedPrompt": "分享了一条提示语",
"areYouSureToShareThisPromptToCommunity": "您确定要分享此提示语到社区吗?",
"shareRecording": "分享录音",
"sharedRecording": "分享了一条录音",
"areYouSureToShareThisRecordingToCommunity": "您确定要分享此录音到社区吗?",
"shareStory": "分享文章",
"sharedStory": "分享了一篇文章",
"areYouSureToShareThisStoryToCommunity": "您确定要分享此文章到社区吗?",
"addToLibary": "添加到资源库",
"areYouSureToAddThisVideoToYourLibrary": "您确定要添加此视频到资料库吗?",
"areYouSureToAddThisAudioToYourLibrary": "您确定要添加此音频到资料库吗?",
"audioAlreadyAddedToLibrary": "资料库已经存在此音频",
"videoAlreadyAddedToLibrary": "资料库已经存在此视频",
"audioSuccessfullyAddedToLibrary": "音频成功添加到资料库",
"videoSuccessfullyAddedToLibrary": "视频成功添加到资料库",
"sendToAIAssistant": "发送到智能助手",
"removeSharing": "取消分享",
"areYouSureToRemoveThisSharing": "您确定要取消分享吗?",
"removeSharingSuccessfully": "取消分享成功",
"removeSharingFailed": "取消分享失败"
"recordTooShort": "录音时长太短"
}

View File

@@ -90,29 +90,27 @@ class AudiosHandler {
private async create(
event: IpcMainEvent,
uri: string,
source: string,
params: {
name?: string;
coverUrl?: string;
} = {}
) {
let file = uri;
let source;
if (uri.startsWith("http")) {
let file = source;
if (source.startsWith("http")) {
try {
if (youtubedr.validateYtURL(uri)) {
file = await youtubedr.autoDownload(uri);
if (youtubedr.validateYtURL(source)) {
file = await youtubedr.autoDownload(source);
} else {
file = await downloader.download(uri, {
file = await downloader.download(source, {
webContents: event.sender,
});
}
if (!file) throw new Error("Failed to download file");
source = uri;
} catch (err) {
return event.sender.send("on-notification", {
type: "error",
message: t("models.audio.failedToDownloadFile", { file: uri }),
message: t("models.audio.failedToDownloadFile", { file: source }),
});
}
}

View File

@@ -1,6 +1,5 @@
import { ipcMain, IpcMainEvent } from "electron";
import { CacheObject } from "@main/db/models";
import db from "@main/db";
class CacheObjectsHandler {
private async get(event: IpcMainEvent, key: string) {
@@ -50,7 +49,6 @@ class CacheObjectsHandler {
private async clear(event: IpcMainEvent) {
return CacheObject.destroy({ where: {} })
.then(() => {
db.connection.query("VACUUM");
return;
})
.catch((err) => {

View File

@@ -90,30 +90,27 @@ class VideosHandler {
private async create(
event: IpcMainEvent,
uri: string,
source: string,
params: {
name?: string;
coverUrl?: string;
md5?: string;
} = {}
) {
let file = uri;
let source;
if (uri.startsWith("http")) {
let file = source;
if (source.startsWith("http")) {
try {
if (youtubedr.validateYtURL(uri)) {
file = await youtubedr.autoDownload(uri);
if (youtubedr.validateYtURL(source)) {
file = await youtubedr.autoDownload(source);
} else {
file = await downloader.download(uri, {
file = await downloader.download(source, {
webContents: event.sender,
});
}
if (!file) throw new Error("Failed to download file");
source = uri;
} catch (err) {
return event.sender.send("on-notification", {
type: "error",
message: t("models.video.failedToDownloadFile", { file: uri }),
message: t("models.video.failedToDownloadFile", { file: source }),
});
}
}

View File

@@ -51,6 +51,9 @@ db.connect = async () => {
db.connection = sequelize;
// vacuum the database
await sequelize.query("VACUUM");
const umzug = new Umzug({
migrations: { glob: __dirname + "/migrations/*.js" },
context: sequelize.getQueryInterface(),
@@ -65,23 +68,6 @@ db.connect = async () => {
await sequelize.sync();
await sequelize.authenticate();
// TODO:
// clear the large waveform data in DB.
// Remove this in next release
const caches = await CacheObject.findAll({
attributes: ["id", "key"],
});
const cacheIds: string[] = [];
caches.forEach((cache) => {
if (cache.key.startsWith("waveform")) {
cacheIds.push(cache.id);
}
});
await CacheObject.destroy({ where: { id: cacheIds } });
// vacuum the database
await sequelize.query("VACUUM");
// register handlers
audiosHandler.register();
cacheObjectsHandler.register();

View File

@@ -25,21 +25,13 @@ import mainWindow from "@main/window";
import log from "electron-log/main";
import storage from "@main/storage";
import Ffmpeg from "@main/ffmpeg";
import { Client } from "@/api";
import { WEB_API_URL } from "@/constants";
import webApi from "@main/web-api";
import { startCase } from "lodash";
import { v5 as uuidv5 } from "uuid";
const SIZE_LIMIT = 1024 * 1024 * 50; // 50MB
const logger = log.scope("db/models/audio");
const webApi = new Client({
baseUrl: process.env.WEB_API_URL || WEB_API_URL,
accessToken: settings.getSync("user.accessToken") as string,
logger: log.scope("api/client"),
});
@Table({
modelName: "Audio",
tableName: "audios",

View File

@@ -13,7 +13,7 @@ import {
AllowNull,
} from "sequelize-typescript";
import { Message, Speech } from "@main/db/models";
import { ChatMessageHistory, BufferMemory } from "langchain/memory";
import { ChatMessageHistory , BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ChatOllama } from "langchain/chat_models/ollama";
@@ -267,14 +267,13 @@ export class Conversation extends Model<Conversation> {
const chain = await this.chain();
let response: Generation[] = [];
const result = await chain.call({ input: content }, [
await chain.call({ input: content }, [
{
handleLLMEnd: async (output) => {
response = output.generations[0];
},
},
]);
logger.debug("LLM result:", result);
if (!response) {
throw new Error(t("models.conversation.failedToGenerateResponse"));
@@ -295,9 +294,9 @@ export class Conversation extends Model<Conversation> {
}
);
const replies = await Promise.all(
await Promise.all(
response.map(async (generation) => {
return await Message.create(
await Message.create(
{
conversationId: this.id,
role: "assistant",
@@ -331,7 +330,5 @@ export class Conversation extends Model<Conversation> {
}
await transaction.commit();
return replies.map((reply) => reply.toJSON());
}
}

View File

@@ -14,16 +14,7 @@ import {
} from "sequelize-typescript";
import mainWindow from "@main/window";
import { Recording } from "@main/db/models";
import { Client } from "@/api";
import { WEB_API_URL } from "@/constants";
import settings from "@main/settings";
import log from "electron-log/main";
const webApi = new Client({
baseUrl: process.env.WEB_API_URL || WEB_API_URL,
accessToken: settings.getSync("user.accessToken") as string,
logger: log.scope("api/client"),
});
import webApi from "@main/web-api";
@Table({
modelName: "PronunciationAssessment",

View File

@@ -23,19 +23,12 @@ import { hashFile } from "@/utils";
import log from "electron-log/main";
import storage from "@main/storage";
import Ffmpeg from "@main/ffmpeg";
import { Client } from "@/api";
import { WEB_API_URL } from "@/constants";
import webApi from "@main/web-api";
import { AzureSpeechSdk } from "@main/azure-speech-sdk";
import camelcaseKeys from "camelcase-keys";
const logger = log.scope("db/models/recording");
const webApi = new Client({
baseUrl: process.env.WEB_API_URL || WEB_API_URL,
accessToken: settings.getSync("user.accessToken") as string,
logger: log.scope("api/client"),
});
@Table({
modelName: "Recording",
tableName: "recordings",
@@ -43,7 +36,7 @@ const webApi = new Client({
timestamps: true,
})
export class Recording extends Model<Recording> {
@IsUUID("all")
@IsUUID('all')
@Default(DataType.UUIDV4)
@Column({ primaryKey: true, type: DataType.UUID })
id: string;

View File

@@ -2,7 +2,6 @@ import {
AfterCreate,
AfterUpdate,
AfterDestroy,
AfterFind,
BelongsTo,
Table,
Column,
@@ -16,17 +15,9 @@ import { Audio, Video } from "@main/db/models";
import whisper from "@main/whisper";
import mainWindow from "@main/window";
import log from "electron-log/main";
import { Client } from "@/api";
import { WEB_API_URL, PROCESS_TIMEOUT } from "@/constants";
import settings from "@main/settings";
import webApi from "@main/web-api";
const logger = log.scope("db/models/transcription");
const webApi = new Client({
baseUrl: process.env.WEB_API_URL || WEB_API_URL,
accessToken: settings.getSync("user.accessToken") as string,
logger: log.scope("api/client"),
});
@Table({
modelName: "Transcription",
tableName: "transcriptions",
@@ -34,7 +25,7 @@ const webApi = new Client({
timestamps: true,
})
export class Transcription extends Model<Transcription> {
@IsUUID("all")
@IsUUID('all')
@Default(DataType.UUIDV4)
@Column({ primaryKey: true, type: DataType.UUID })
id: string;
@@ -155,23 +146,6 @@ export class Transcription extends Model<Transcription> {
this.notify(transcription, "destroy");
}
@AfterFind
static expireProcessingState(transcription: Transcription) {
if (transcription?.state !== "processing") return;
if (transcription.updatedAt.getTime() + PROCESS_TIMEOUT < Date.now()) {
if (transcription.result) {
transcription.update({
state: "finished",
});
} else {
transcription.update({
state: "pending",
});
}
}
}
static notify(
transcription: Transcription,
action: "create" | "update" | "destroy"

View File

@@ -25,21 +25,13 @@ import mainWindow from "@main/window";
import log from "electron-log/main";
import storage from "@main/storage";
import Ffmpeg from "@main/ffmpeg";
import { Client } from "@/api";
import { WEB_API_URL } from "@/constants";
import webApi from "@main/web-api";
import { startCase } from "lodash";
import { v5 as uuidv5 } from "uuid";
const SIZE_LIMIT = 1024 * 1024 * 100; // 100MB
const logger = log.scope("db/models/video");
const webApi = new Client({
baseUrl: process.env.WEB_API_URL || WEB_API_URL,
accessToken: settings.getSync("user.accessToken") as string,
logger: log.scope("api/client"),
});
@Table({
modelName: "Video",
tableName: "videos",

View File

@@ -7,47 +7,26 @@ import fs from "fs-extra";
import AdmZip from "adm-zip";
import downloader from "@main/downloader";
import storage from "@main/storage";
import readdirp from "readdirp";
import { t } from "i18next";
const logger = log.scope("ffmpeg");
const logger = log.scope("ffmepg");
export default class FfmpegWrapper {
public ffmpeg: Ffmpeg.FfmpegCommand;
public config: any;
constructor(config?: {
ffmpegPath: string;
ffprobePath: string;
commandExists?: boolean;
}) {
this.config = config || settings.ffmpegConfig();
constructor() {
const config = settings.ffmpegConfig();
if (this.config.commandExists) {
if (config.commandExists) {
logger.info("Using system ffmpeg");
this.ffmpeg = Ffmpeg();
} else {
logger.info("Using downloaded ffmpeg");
const ff = Ffmpeg();
ff.setFfmpegPath(this.config.ffmpegPath);
ff.setFfprobePath(this.config.ffprobePath);
ff.setFfmpegPath(config.ffmpegPath);
ff.setFfprobePath(config.ffprobePath);
this.ffmpeg = ff;
}
}
checkCommand(): Promise<boolean> {
return new Promise((resolve, _reject) => {
this.ffmpeg.getAvailableFormats((err, formats) => {
if (err) {
logger.error("Command not valid:", err);
resolve(false);
} else {
logger.info("Command valid, available formats:", formats);
resolve(true);
}
});
});
}
generateMetadata(input: string): Promise<Ffmpeg.FfprobeData> {
return new Promise((resolve, reject) => {
this.ffmpeg
@@ -313,118 +292,9 @@ export class FfmpegDownloader {
logger.error(err);
event.sender.send("on-notification", {
type: "error",
message: `FFmpeg download failed: ${err.message}`,
});
}
});
ipcMain.handle("ffmpeg-check-command", async (event) => {
const ffmpeg = new FfmpegWrapper();
const valid = await ffmpeg.checkCommand();
if (valid) {
event.sender.send("on-notification", {
type: "success",
message: t("ffmpegCommandIsWorking"),
});
} else {
logger.error("FFmpeg command not valid", ffmpeg.config);
event.sender.send("on-notification", {
type: "warning",
message: t("ffmpegCommandIsNotWorking"),
});
}
return valid;
});
ipcMain.handle("ffmpeg-discover-command", async (event) => {
try {
return await discoverFfmpeg();
} catch (err) {
logger.error(err);
event.sender.send("on-notification", {
type: "error",
message: `FFmpeg discover failed: ${err.message}`,
title: `FFmpeg download failed: ${err.message}`,
});
}
});
}
}
export const discoverFfmpeg = async () => {
const platform = process.platform;
let ffmpegPath: string;
let ffprobePath: string;
const libraryFfmpegPath = path.join(settings.libraryPath(), "ffmpeg");
const scanDirs = [...COMMAND_SCAN_DIR[platform], libraryFfmpegPath];
await Promise.all(
scanDirs.map(async (dir: string) => {
if (!fs.existsSync(dir)) return;
dir = path.resolve(dir);
log.info("FFmpeg scanning: " + dir);
const fileStream = readdirp(dir, {
depth: 3,
});
for await (const entry of fileStream) {
const appName = entry.basename
.replace(".app", "")
.replace(".exe", "")
.toLowerCase();
if (appName === "ffmpeg") {
logger.info("Found ffmpeg: ", entry.fullPath);
ffmpegPath = entry.fullPath;
}
if (appName === "ffprobe") {
logger.info("Found ffprobe: ", entry.fullPath);
ffprobePath = entry.fullPath;
}
if (ffmpegPath && ffprobePath) break;
}
})
);
let valid = false;
if (ffmpegPath && ffprobePath) {
const ffmepg = new FfmpegWrapper({ ffmpegPath, ffprobePath });
valid = await ffmepg.checkCommand();
}
if (valid) {
settings.setSync("ffmpeg", {
ffmpegPath,
ffprobePath,
});
} else {
ffmpegPath = undefined;
ffprobePath = undefined;
settings.setSync("ffmpeg", null);
}
return {
ffmpegPath,
ffprobePath,
scanDirs,
};
};
export const COMMAND_SCAN_DIR: { [key: string]: string[] } = {
darwin: [
"/Applications",
process.env.HOME + "/Applications",
"/opt/homebrew/bin",
],
linux: ["/usr/bin", "/usr/local/bin", "/snap/bin"],
win32: [
process.env.SystemDrive + "\\Program Files\\",
process.env.SystemDrive + "\\Program Files (x86)\\",
process.env.LOCALAPPDATA + "\\Apps\\2.0\\",
],
};

View File

@@ -1,7 +1,6 @@
import * as i18n from "i18next";
import en from "@/i18n/en.json";
import zh_CN from "@/i18n/zh-CN.json";
import settings from "@main/settings";
const resources = {
en: {
@@ -14,9 +13,7 @@ const resources = {
i18n.init({
resources,
lng: settings.language(),
supportedLngs: ["en", "zh-CN"],
fallbackLng: "en",
lng: "zh-CN",
interpolation: {
escapeValue: false, // react already safes from xss
},

View File

@@ -6,25 +6,9 @@ import fs from "fs-extra";
import os from "os";
import commandExists from "command-exists";
import log from "electron-log";
import * as i18n from "i18next";
const logger = log.scope("settings");
const language = () => {
const _language = settings.getSync("language");
if (!_language || typeof _language !== "string") {
settings.setSync("language", "en");
}
return settings.getSync("language") as string;
};
const switchLanguage = (language: string) => {
settings.setSync("language", language);
i18n.changeLanguage(language);
};
const libraryPath = () => {
const _library = settings.getSync("library");
@@ -96,19 +80,32 @@ const userDataPath = () => {
};
const ffmpegConfig = () => {
const ffmpegPath = settings.getSync("ffmpeg.ffmpegPath");
const ffprobePath = settings.getSync("ffmpeg.ffprobePath");
const _ffmpegPath = path.join(
libraryPath(),
"ffmpeg",
os.platform() === "win32" ? "ffmpeg.exe" : "ffmpeg"
);
const _ffprobePath = path.join(
libraryPath(),
"ffmpeg",
os.platform() === "win32" ? "ffprobe.exe" : "ffprobe"
);
const ffmpegPath = fs.existsSync(_ffmpegPath) ? _ffmpegPath : "";
const ffprobePath = fs.existsSync(_ffprobePath) ? _ffprobePath : "";
const _commandExists =
commandExists.sync("ffmpeg") && commandExists.sync("ffprobe");
const ready = Boolean(_commandExists || (ffmpegPath && ffprobePath));
const config = {
os: os.platform(),
arch: os.arch(),
commandExists: _commandExists,
ffmpegPath,
ffprobePath,
ready: Boolean(_commandExists || (ffmpegPath && ffprobePath)),
ready,
};
logger.info("ffmpeg config", config);
@@ -181,14 +178,6 @@ export default {
settings.setSync("ffmpeg.ffmpegPath", config.ffmpegPath);
settings.setSync("ffmpeg.ffprobePath", config.ffrobePath);
});
ipcMain.handle("settings-get-language", (_event) => {
return language();
});
ipcMain.handle("settings-switch-language", (_event, language) => {
switchLanguage(language);
});
},
cachePath,
libraryPath,
@@ -199,7 +188,5 @@ export default {
userDataPath,
dbPath,
ffmpegConfig,
language,
switchLanguage,
...settings,
};

View File

@@ -1,38 +0,0 @@
import { ipcMain } from "electron";
import settings from "@main/settings";
import path from "path";
import fs from "fs-extra";
export class Waveform {
public dir = path.join(settings.libraryPath(), "waveforms");
constructor() {
fs.ensureDirSync(this.dir);
}
find(id: string) {
const file = path.join(this.dir, id + ".waveform.json");
if (fs.existsSync(file)) {
return fs.readJsonSync(file);
} else {
return null;
}
}
save(id: string, data: WaveFormDataType) {
const file = path.join(this.dir, id + ".waveform.json");
fs.writeJsonSync(file, data);
}
registerIpcHandlers() {
ipcMain.handle("waveforms-find", async (_event, id) => {
return this.find(id);
});
ipcMain.handle("waveforms-save", (_event, id, data) => {
return this.save(id, data);
});
}
}

enjoy/src/main/web-api.ts (new file, 382 lines)
View File

@@ -0,0 +1,382 @@
import { ipcMain } from "electron";
import axios, { AxiosInstance } from "axios";
import { WEB_API_URL } from "@/constants";
import settings from "@main/settings";
import log from "electron-log/main";
import decamelizeKeys from "decamelize-keys";
import camelcaseKeys from "camelcase-keys";
const logger = log.scope("web-api");
const ONE_MINUTE = 1000 * 60; // 1 minute
class WebApi {
public api: AxiosInstance;
constructor() {
this.api = axios.create({
baseURL: process.env.WEB_API_URL || WEB_API_URL,
timeout: ONE_MINUTE,
headers: {
"Content-Type": "application/json",
},
});
this.api.interceptors.request.use((config) => {
config.headers.Authorization = `Bearer ${settings.getSync(
"user.accessToken"
)}`;
logger.info(
config.method.toUpperCase(),
config.baseURL + config.url,
config.data,
config.params
);
return config;
});
this.api.interceptors.response.use(
(response) => {
logger.info(
response.status,
response.config.method.toUpperCase(),
response.config.baseURL + response.config.url
);
return camelcaseKeys(response.data, { deep: true });
},
(err) => {
if (err.response) {
logger.error(
err.response.status,
err.response.config.method.toUpperCase(),
err.response.config.baseURL + err.response.config.url
);
logger.error(err.response.data);
return Promise.reject(err.response.data);
}
if (err.request) {
logger.error(err.request);
} else {
logger.error(err.message);
}
return Promise.reject(err);
}
);
}
me() {
return this.api.get("/api/me");
}
auth(params: { provider: string; code: string }): Promise<UserType> {
return this.api.post("/api/sessions", decamelizeKeys(params));
}
syncAudio(audio: Partial<AudioType>) {
return this.api.post("/api/mine/audios", decamelizeKeys(audio));
}
syncVideo(video: Partial<VideoType>) {
return this.api.post("/api/mine/videos", decamelizeKeys(video));
}
syncTranscription(transcription: Partial<TranscriptionType>) {
return this.api.post("/api/transcriptions", decamelizeKeys(transcription));
}
syncRecording(recording: Partial<RecordingType>) {
if (!recording) return;
return this.api.post("/api/mine/recordings", decamelizeKeys(recording));
}
generateSpeechToken(): Promise<{ token: string; region: string }> {
return this.api.post("/api/speech/tokens");
}
syncPronunciationAssessment(
pronunciationAssessment: Partial<PronunciationAssessmentType>
) {
if (!pronunciationAssessment) return;
return this.api.post(
"/api/mine/pronunciation_assessments",
decamelizeKeys(pronunciationAssessment)
);
}
recordingAssessment(id: string) {
return this.api.get(`/api/mine/recordings/${id}/assessment`);
}
lookup(params: {
word: string;
context: string;
sourceId?: string;
sourceType?: string;
}): Promise<LookupType> {
return this.api.post("/api/lookups", decamelizeKeys(params));
}
lookupInBatch(
lookups: {
word: string;
context: string;
sourceId?: string;
sourceType?: string;
}[]
): Promise<{ successCount: number; total: number }> {
return this.api.post("/api/lookups/batch", {
lookups: decamelizeKeys(lookups, { deep: true }),
});
}
extractVocabularyFromStory(storyId: string): Promise<string[]> {
return this.api.post(`/api/stories/${storyId}/extract_vocabulary`);
}
storyMeanings(
storyId: string,
params?: {
page?: number;
items?: number;
storyId?: string;
}
): Promise<
{
meanings: MeaningType[];
} & PagyResponseType
> {
return this.api.get(`/api/stories/${storyId}/meanings`, {
params: decamelizeKeys(params),
});
}
mineMeanings(params?: {
page?: number;
items?: number;
sourceId?: string;
sourceType?: string;
status?: string;
}): Promise<
{
meanings: MeaningType[];
} & PagyResponseType
> {
return this.api.get("/api/mine/meanings", {
params: decamelizeKeys(params),
});
}
createStory(params: CreateStoryParamsType): Promise<StoryType> {
return this.api.post("/api/stories", decamelizeKeys(params));
}
story(id: string): Promise<StoryType> {
return this.api.get(`/api/stories/${id}`);
}
stories(params?: { page: number }): Promise<
{
stories: StoryType[];
} & PagyResponseType
> {
return this.api.get("/api/stories", { params: decamelizeKeys(params) });
}
mineStories(params?: { page: number }): Promise<
{
stories: StoryType[];
} & PagyResponseType
> {
return this.api.get("/api/mine/stories", {
params: decamelizeKeys(params),
});
}
starStory(storyId: string) {
return this.api.post(`/api/mine/stories`, decamelizeKeys({ storyId }));
}
unstarStory(storyId: string) {
return this.api.delete(`/api/mine/stories/${storyId}`);
}
registerIpcHandlers() {
ipcMain.handle("web-api-auth", async (event, params) => {
return this.auth(params)
.then((user) => {
return user;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-me", async (event) => {
return this.me()
.then((user) => {
return user;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-lookup", async (event, params) => {
return this.lookup(params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-lookup-in-batch", async (event, params) => {
return this.lookupInBatch(params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-mine-meanings", async (event, params) => {
return this.mineMeanings(params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-create-story", async (event, params) => {
return this.createStory(params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle(
"web-api-extract-vocabulary-from-story",
async (event, storyId) => {
return this.extractVocabularyFromStory(storyId)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
}
);
ipcMain.handle(
"web-api-story-meanings",
async (event, storyId, params) => {
return this.storyMeanings(storyId, params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
}
);
ipcMain.handle("web-api-stories", async (event, params) => {
return this.stories(params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-story", async (event, id) => {
return this.story(id)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-mine-stories", async (event, params) => {
return this.mineStories(params)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-star-story", async (event, id) => {
return this.starStory(id)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
ipcMain.handle("web-api-unstar-story", async (event, id) => {
return this.unstarStory(id)
.then((response) => {
return response;
})
.catch((error) => {
event.sender.send("on-notification", {
type: "error",
message: error.message,
});
});
});
}
}
export default new WebApi();

View File

@@ -14,11 +14,11 @@ import downloader from "@main/downloader";
import whisper from "@main/whisper";
import fs from "fs-extra";
import "@main/i18n";
import webApi from "@main/web-api";
import log from "electron-log/main";
import { WEB_API_URL } from "@/constants";
import { AudibleProvider, TedProvider } from "@main/providers";
import { FfmpegDownloader } from "@main/ffmpeg";
import { Waveform } from "./waveform";
log.initialize({ preload: true });
const logger = log.scope("window");
@@ -26,7 +26,6 @@ const logger = log.scope("window");
const audibleProvider = new AudibleProvider();
const tedProvider = new TedProvider();
const ffmpegDownloader = new FfmpegDownloader();
const waveform = new Waveform();
const main = {
win: null as BrowserWindow | null,
@@ -39,6 +38,8 @@ main.init = () => {
return;
}
webApi.registerIpcHandlers();
// Prepare local database
db.registerIpcHandlers();
@@ -48,9 +49,6 @@ main.init = () => {
// Whisper
whisper.registerIpcHandlers();
// Waveform
waveform.registerIpcHandlers();
// Downloader
downloader.registerIpcHandlers();
@@ -221,14 +219,6 @@ main.init = () => {
// App options
ipcMain.handle("app-reset", () => {
fs.removeSync(settings.userDataPath());
fs.removeSync(settings.file());
app.relaunch();
app.exit();
});
ipcMain.handle("app-reset-settings", () => {
fs.removeSync(settings.file());
app.relaunch();
app.exit();

View File

@@ -204,7 +204,7 @@ class Youtubedr {
this.getYtVideoId(url);
return true;
} catch (error) {
logger.warn(error);
console.error(error);
return false;
}
};

View File

@@ -8,9 +8,6 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
reset: () => {
ipcRenderer.invoke("app-reset");
},
resetSettings: () => {
ipcRenderer.invoke("app-reset-settings");
},
relaunch: () => {
ipcRenderer.invoke("app-relaunch");
},
@@ -146,15 +143,6 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
getFfmpegConfig: () => {
return ipcRenderer.invoke("settings-get-ffmpeg-config");
},
setFfmpegConfig: (config: FfmpegConfigType) => {
return ipcRenderer.invoke("settings-set-ffmpeg-config", config);
},
getLanguage: (language: string) => {
return ipcRenderer.invoke("settings-get-language", language);
},
switchLanguage: (language: string) => {
return ipcRenderer.invoke("settings-switch-language", language);
},
},
path: {
join: (...paths: string[]) => {
@@ -187,8 +175,8 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
findOne: (params: object) => {
return ipcRenderer.invoke("audios-find-one", params);
},
create: (uri: string, params?: object) => {
return ipcRenderer.invoke("audios-create", uri, params);
create: (source: string, params?: object) => {
return ipcRenderer.invoke("audios-create", source, params);
},
update: (id: string, params: object) => {
return ipcRenderer.invoke("audios-update", id, params);
@@ -213,8 +201,8 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
findOne: (params: object) => {
return ipcRenderer.invoke("videos-find-one", params);
},
create: (uri: string, params?: object) => {
return ipcRenderer.invoke("videos-create", uri, params);
create: (source: string, params?: object) => {
return ipcRenderer.invoke("videos-create", source, params);
},
update: (id: string, params: object) => {
return ipcRenderer.invoke("videos-update", id, params);
@@ -350,12 +338,6 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
download: () => {
return ipcRenderer.invoke("ffmpeg-download");
},
discover: () => {
return ipcRenderer.invoke("ffmpeg-discover-command");
},
check: () => {
return ipcRenderer.invoke("ffmpeg-check-command");
},
},
download: {
onState: (
@@ -374,6 +356,50 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
ipcRenderer.removeAllListeners("download-on-error");
},
},
webApi: {
auth: (params: object) => {
return ipcRenderer.invoke("web-api-auth", params);
},
me: () => {
return ipcRenderer.invoke("web-api-me");
},
lookup: (params: object) => {
return ipcRenderer.invoke("web-api-lookup", params);
},
lookupInBatch: (params: object[]) => {
return ipcRenderer.invoke("web-api-lookup-in-batch", params);
},
createStory: (params: object) => {
return ipcRenderer.invoke("web-api-create-story", params);
},
starStory: (storyId: string) => {
return ipcRenderer.invoke("web-api-star-story", storyId);
},
unstarStory: (storyId: string) => {
return ipcRenderer.invoke("web-api-unstar-story", storyId);
},
extractVocabularyFromStory: (storyId: string) => {
return ipcRenderer.invoke(
"web-api-extract-vocabulary-from-story",
storyId
);
},
storyMeanings: (storyId: string, params: object) => {
return ipcRenderer.invoke("web-api-story-meanings", storyId, params);
},
story: (id: string) => {
return ipcRenderer.invoke("web-api-story", id);
},
stories: (params: object) => {
return ipcRenderer.invoke("web-api-stories", params);
},
mineStories: (params: object) => {
return ipcRenderer.invoke("web-api-mine-stories", params);
},
mineMeanings: (params: object) => {
return ipcRenderer.invoke("web-api-mine-meanings", params);
},
},
cacheObjects: {
get: (key: string) => {
return ipcRenderer.invoke("cache-objects-get", key);
@@ -399,12 +425,4 @@ contextBridge.exposeInMainWorld("__ENJOY_APP__", {
return ipcRenderer.invoke("transcriptions-update", id, params);
},
},
waveforms: {
find: (id: string) => {
return ipcRenderer.invoke("waveforms-find", id);
},
save: (id: string, data: WaveFormDataType) => {
return ipcRenderer.invoke("waveforms-save", id, data);
},
}
});

View File

@@ -6,29 +6,19 @@ import {
} from "@renderer/context";
import router from "./router";
import { RouterProvider } from "react-router-dom";
import { Toaster, toast } from "@renderer/components/ui";
import { Toaster, useToast } from "@renderer/components/ui";
import { t } from "i18next";
import { Tooltip } from "react-tooltip";
import { useHotkeys } from "react-hotkeys-hook";
function App() {
const { toast } = useToast();
window.__ENJOY_APP__.onNotification((_event, notification) => {
switch (notification.type) {
case "success":
toast.success(notification.message);
break;
case "error":
toast.error(notification.message);
break;
case "info":
toast.info(notification.message);
break;
case "warning":
toast.warning(notification.message);
break;
default:
toast.message(notification.message);
break;
}
toast({
title: t(notification.type),
description: notification.message,
variant: notification.type === "error" ? "destructive" : "default",
});
});
const ControlOrCommand = navigator.platform.includes("Mac")
@@ -53,7 +43,7 @@ function App() {
<AISettingsProvider>
<DbProvider>
<RouterProvider router={router} />
<Toaster richColors closeButton position="top-center" />
<Toaster />
<Tooltip id="global-tooltip" />
</DbProvider>
</AISettingsProvider>

View File

@@ -2,7 +2,7 @@ import { Link } from "react-router-dom";
import { cn } from "@renderer/lib/utils";
export const AudioCard = (props: {
audio: Partial<AudioType>;
audio: AudioType;
className?: string;
}) => {
const { audio, className } = props;

View File

@@ -11,29 +11,16 @@ import {
MediaTranscription,
} from "@renderer/components";
import { LoaderIcon } from "lucide-react";
import {
AlertDialog,
AlertDialogHeader,
AlertDialogDescription,
AlertDialogTitle,
AlertDialogContent,
AlertDialogFooter,
AlertDialogCancel,
Button,
ScrollArea,
toast,
} from "@renderer/components/ui";
import { t } from "i18next";
import { ScrollArea } from "@renderer/components/ui";
export const AudioDetail = (props: { id?: string; md5?: string }) => {
const { id, md5 } = props;
const { addDblistener, removeDbListener } = useContext(DbProviderContext);
const { EnjoyApp, webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [audio, setAudio] = useState<AudioType | null>(null);
const [transcription, setTranscription] = useState<TranscriptionType>(null);
const [initialized, setInitialized] = useState<boolean>(false);
const [sharing, setSharing] = useState<boolean>(false);
// Player controls
const [currentTime, setCurrentTime] = useState<number>(0);
@@ -56,35 +43,6 @@ export const AudioDetail = (props: { id?: string; md5?: string }) => {
}
};
const handleShare = async () => {
if (!audio.source && !audio.isUploaded) {
try {
await EnjoyApp.audios.upload(audio.id);
} catch (err) {
toast.error(t("shareFailed"), {
description: err.message,
});
return;
}
}
webApi
.createPost({
targetType: "Audio",
targetId: audio.id,
})
.then(() => {
toast.success(t("sharedSuccessfully"), {
description: t("sharedAudio"),
});
})
.catch((err) => {
toast.error(t("shareFailed"), {
description: err.message,
});
});
setSharing(false);
};
useEffect(() => {
const where = id ? { id } : { md5 };
EnjoyApp.audios.findOne(where).then((audio) => {
@@ -132,7 +90,7 @@ export const AudioDetail = (props: { id?: string; md5?: string }) => {
mediaId={audio.id}
mediaType="Audio"
mediaUrl={audio.src}
mediaMd5={audio.md5}
waveformCacheKey={`waveform-audio-${audio.md5}`}
transcription={transcription}
currentTime={currentTime}
setCurrentTime={setCurrentTime}
@@ -152,7 +110,6 @@ export const AudioDetail = (props: { id?: string; md5?: string }) => {
setPlaybackRate={setPlaybackRate}
displayInlineCaption={displayInlineCaption}
setDisplayInlineCaption={setDisplayInlineCaption}
onShare={() => setSharing(true)}
/>
<ScrollArea className={`flex-1 relative bg-muted`}>
@@ -189,25 +146,8 @@ export const AudioDetail = (props: { id?: string; md5?: string }) => {
</div>
</div>
<AlertDialog open={sharing} onOpenChange={(value) => setSharing(value)}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("shareAudio")}</AlertDialogTitle>
<AlertDialogDescription>
{t("areYouSureToShareThisAudioToCommunity")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<Button variant="default" onClick={handleShare}>
{t("share")}
</Button>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
{!initialized && (
<div className="top-0 w-full h-full absolute z-30 bg-background/10 flex items-center justify-center">
<div className="top-0 w-full h-full absolute z-30 bg-white/10 flex items-center justify-center">
<LoaderIcon className="text-muted-foreground animate-spin w-8 h-8" />
</div>
)}

View File

@@ -4,11 +4,9 @@ import {
AddMediaButton,
AudiosTable,
AudioEditForm,
LoaderSpin,
} from "@renderer/components";
import { t } from "i18next";
import {
Button,
Tabs,
TabsContent,
TabsList,
@@ -25,7 +23,6 @@ import {
DialogContent,
DialogHeader,
DialogTitle,
toast,
} from "@renderer/components/ui";
import {
DbProviderContext,
@@ -46,51 +43,28 @@ export const AudiosComponent = () => {
const { addDblistener, removeDbListener } = useContext(DbProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [offset, setOffest] = useState(0);
const [loading, setLoading] = useState(false);
const navigate = useNavigate();
useEffect(() => {
fetchResources();
}, []);
useEffect(() => {
addDblistener(onAudiosUpdate);
fetchAudios();
fetchResources();
return () => {
removeDbListener(onAudiosUpdate);
};
}, []);
const fetchAudios = async () => {
if (loading) return;
if (offset === -1) return;
const fetchResources = async () => {
const audios = await EnjoyApp.audios.findAll({
limit: 10,
});
if (!audios) return;
setLoading(true);
const limit = 10;
EnjoyApp.audios
.findAll({
offset,
limit,
})
.then((_audios) => {
if (_audios.length === 0) {
setOffest(-1);
return;
}
if (_audios.length < limit) {
setOffest(-1);
} else {
setOffest(offset + _audios.length);
}
dispatchAudios({ type: "append", records: _audios });
})
.catch((err) => {
toast.error(err.message);
})
.finally(() => {
setLoading(false);
});
dispatchAudios({ type: "set", records: audios });
};
const onAudiosUpdate = (event: CustomEvent) => {
@@ -105,7 +79,7 @@ export const AudiosComponent = () => {
dispatchAudios({ type: "destroy", record });
}
} else if (model === "Video" && action === "create") {
navigate(`/videos/${record.id}`);
navigate(`/videos/${record.id}`);
} else if (model === "Transcription" && action === "update") {
dispatchAudios({
type: "update",
@@ -119,8 +93,6 @@ export const AudiosComponent = () => {
};
if (audios.length === 0) {
if (loading) return <LoaderSpin />;
return (
<div className="flex items-center justify-center h-48 border border-dashed rounded-lg">
<AddMediaButton />
@@ -163,14 +135,6 @@ export const AudiosComponent = () => {
</Tabs>
</div>
{offset > -1 && (
<div className="flex items-center justify-center my-4">
<Button variant="link" onClick={fetchAudios}>
{t("loadMore")}
</Button>
</div>
)}
<Dialog
open={!!editing}
onOpenChange={(value) => {
View File
@@ -627,6 +627,8 @@ export const LLM_PROVIDERS: { [key: string]: any } = {
openai: {
name: "OpenAI",
description: t("youNeedToSetupApiKeyBeforeUsingOpenAI"),
baseUrl:
"https://gateway.ai.cloudflare.com/v1/11d43ab275eb7e1b271ba4089ecc3864/enjoy/openai",
models: [
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo",
View File
@@ -1,68 +0,0 @@
import { useContext, useEffect, useState } from "react";
import { AppSettingsProviderContext } from "@renderer/context";
import { ScrollArea, toast } from "@renderer/components/ui";
import { LoaderSpin } from "@renderer/components";
import { MessageCircleIcon } from "lucide-react";
export const ConversationsShortcut = (props: {
prompt: string;
onReply?: (reply: MessageType[]) => void;
}) => {
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const { prompt, onReply } = props;
const [conversations, setConversations] = useState<ConversationType[]>([]);
const [loading, setLoading] = useState<boolean>(false);
const ask = (conversation: ConversationType) => {
setLoading(true);
EnjoyApp.conversations
.ask(conversation.id, {
content: prompt,
})
.then((replies) => {
onReply(replies);
})
.catch((error) => {
toast.error(error.message);
})
.finally(() => {
setLoading(false);
});
};
useEffect(() => {
EnjoyApp.conversations.findAll({ limit: 10 }).then((conversations) => {
setConversations(conversations);
setLoading(false);
});
}, []);
if (loading) {
return <LoaderSpin />;
}
return (
<ScrollArea>
{conversations.map((conversation) => {
return (
<div
key={conversation.id}
onClick={() => ask(conversation)}
className="bg-background text-primary rounded-full w-full mb-2 py-2 px-4 hover:bg-primary hover:text-white cursor-pointer flex items-center border"
style={{
borderLeftColor: `#${conversation.id
.replaceAll("-", "")
.substr(0, 6)}`,
borderLeftWidth: 3,
}}
>
<div className="">
<MessageCircleIcon className="mr-2" />
</div>
<div className="flex-1 truncated">{conversation.name}</div>
</div>
);
})}
</ScrollArea>
);
};
View File
@@ -1,5 +1,4 @@
export * from "./conversation-form";
export * from "./conversations-shortcut";
export * from './conversation-form';
export * from "./speech-form";
export * from './speech-form';
export * from "./speech-player";
View File
@@ -1,7 +1,7 @@
import { useEffect, useState, useRef, useCallback } from "react";
import { PitchContour } from "@renderer/components";
import WaveSurfer from "wavesurfer.js";
import { Button, Skeleton } from "@renderer/components/ui";
import { Button } from "@renderer/components/ui";
import { PlayIcon, PauseIcon } from "lucide-react";
import { useIntersectionObserver } from "@uidotdev/usehooks";
import { secondsToTimestamp } from "@renderer/lib/utils";
@@ -18,7 +18,6 @@ export const SpeechPlayer = (props: {
threshold: 1,
});
const [duration, setDuration] = useState<number>(0);
const [initialized, setInitialized] = useState(false);
const onPlayClick = useCallback(() => {
wavesurfer.isPlaying() ? wavesurfer.pause() : wavesurfer.play();
@@ -70,7 +69,6 @@ export const SpeechPlayer = (props: {
height,
})
);
setInitialized(true);
}),
];
@@ -89,17 +87,9 @@ export const SpeechPlayer = (props: {
</div>
<div
ref={ref}
className="bg-background rounded-lg grid grid-cols-9 items-center relative pl-2 h-[100px]"
className="bg-white rounded-lg grid grid-cols-9 items-center relative pl-2 h-[100px]"
>
{!initialized && (
<div className="col-span-9 flex flex-col justify-around h-[80px]">
<Skeleton className="h-3 w-full rounded-full" />
<Skeleton className="h-3 w-full rounded-full" />
<Skeleton className="h-3 w-full rounded-full" />
</div>
)}
<div className={`flex justify-center ${initialized ? "" : "hidden"}`}>
<div className="flex justify-center">
<Button
onClick={onPlayClick}
className="aspect-square rounded-full p-2 w-12 h-12 bg-blue-600 hover:bg-blue-500"
@@ -112,10 +102,7 @@ export const SpeechPlayer = (props: {
</Button>
</div>
<div
className={`col-span-8 ${initialized ? "" : "hidden"}`}
ref={containerRef}
></div>
<div className="col-span-8" ref={containerRef}></div>
</div>
</div>
);
View File
@@ -1,19 +1,13 @@
import { t } from "i18next";
import { useContext, useEffect, useState } from "react";
import { Button, Progress, toast } from "@renderer/components/ui";
import { Button, Progress } from "@renderer/components/ui";
import { AppSettingsProviderContext } from "@renderer/context";
import { CheckCircle2Icon, XCircleIcon, LoaderIcon } from "lucide-react";
import Markdown from "react-markdown";
export const FfmpegCheck = () => {
const { ffmpegConfig, setFfmegConfig, EnjoyApp } = useContext(
AppSettingsProviderContext
);
const [scanResult, setScanResult] = useState<{
ffmpegPath: string;
ffprobePath: string;
scanDirs: string[];
}>();
const [downloading, setDownloading] = useState(false);
const [progress, setProgress] = useState(0);
@@ -23,25 +17,15 @@ export const FfmpegCheck = () => {
});
};
const discoverFfmpeg = () => {
EnjoyApp.ffmpeg.discover().then((config) => {
setScanResult(config);
if (config.ffmpegPath && config.ffprobePath) {
toast.success(t("ffmpegFound"));
refreshFfmpegConfig();
} else {
toast.error(t("ffmpegNotFound"));
}
});
};
const downloadFfmpeg = () => {
listenToDownloadState();
setDownloading(true);
EnjoyApp.ffmpeg
.download()
.then(() => {
refreshFfmpegConfig();
.then((config) => {
if (config) {
setFfmegConfig(config);
}
})
.finally(() => {
setDownloading(false);
@@ -60,11 +44,11 @@ export const FfmpegCheck = () => {
}, [ffmpegConfig?.ready]);
useEffect(() => {
discoverFfmpeg();
refreshFfmpegConfig();
}, []);
return (
<div className="w-full max-w-screen-md mx-auto px-6">
<div className="w-full max-w-sm px-6">
{ffmpegConfig?.ready ? (
<>
<div className="flex justify-center items-center mb-8">
@@ -74,7 +58,7 @@ export const FfmpegCheck = () => {
<CheckCircle2Icon className="text-green-500 w-10 h-10 mb-4" />
</div>
<div className="text-center text-sm opacity-70">
{t("ffmpegFoundAt", { path: ffmpegConfig.ffmpegPath })}
{t("ffmpegInstalled")}
</div>
</>
) : (
@@ -85,87 +69,24 @@ export const FfmpegCheck = () => {
<div className="flex justify-center mb-4">
<XCircleIcon className="text-red-500 w-10 h-10" />
</div>
<div className="mb-4">
<div className="text-center text-sm mb-2">
{t("ffmpegNotFound")}
</div>
{scanResult && (
<div className="text-center text-xs text-muted-foreground mb-2">
{t("tryingToFindValidFFmepgInTheseDirectories", {
dirs: scanResult.scanDirs.join(", "),
})}
</div>
)}
<div className="text-center text-sm opacity-70 mb-4">
{t("ffmpegNotInstalled")}
</div>
<div className="flex items-center justify-center space-x-4 mb-4">
<Button onClick={discoverFfmpeg} variant="default">
{t("scan")}
<div className="flex items-center justify-center mb-4">
<Button
disabled={downloading}
className=""
onClick={downloadFfmpeg}
>
{downloading && <LoaderIcon className="animate-spin mr-2" />}
{t("downloadFfmpeg")}
</Button>
{ffmpegConfig.os === "win32" && (
<Button
variant="secondary"
disabled={downloading}
onClick={downloadFfmpeg}
>
{downloading && <LoaderIcon className="animate-spin mr-2" />}
{t("download")}
</Button>
)}
</div>
{downloading && (
<div className="w-full">
<Progress value={progress} />
</div>
)}
{ffmpegConfig.os === "darwin" && (
<div className="my-6 select-text prose mx-auto border rounded-lg p-4">
<h3 className="text-center">{t("ffmpegInstallSteps")}</h3>
<h4>
1. {t("install")}{" "}
<a
className="cursor-pointer text-blue-500 hover:underline"
onClick={() => {
EnjoyApp.shell.openExternal("https://brew.sh/");
}}
>
Homebrew
</a>
</h4>
<p>{t("runTheFollowingCommandInTerminal")} </p>
<pre>
<code>
/bin/bash -c "$(curl -fsSL
https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
</code>
</pre>
<h4>2. {t("install")} FFmpeg</h4>
<p>{t("runTheFollowingCommandInTerminal")} </p>
<pre>
<code>brew install ffmpeg</code>
</pre>
<h4>3. {t("scan")} FFmpeg</h4>
<p>
{t("click")}
<Button
onClick={discoverFfmpeg}
variant="default"
size="sm"
className="mx-2"
>
{t("scan")}
</Button>
, {t("willAutomaticallyFindFFmpeg")}
</p>
</div>
)}
</>
)}
</div>
View File
@@ -10,9 +10,6 @@ export * from "./videos";
export * from "./medias";
export * from "./posts";
export * from "./users";
export * from "./db-state";
export * from "./layout";
View File
@@ -1,16 +1,16 @@
import { Button, toast, Separator } from "@renderer/components/ui";
import { useContext, useEffect } from "react";
import { Button, useToast } from "@renderer/components/ui";
import { useContext, useState, useEffect } from "react";
import { WEB_API_URL } from "@/constants";
import { AppSettingsProviderContext } from "@renderer/context";
import { t } from "i18next";
import { UserSettings, LanguageSettings } from "@renderer/components";
export const LoginForm = () => {
const { EnjoyApp, login, webApi, user } = useContext(
AppSettingsProviderContext
);
const { toast } = useToast();
const { EnjoyApp, login } = useContext(AppSettingsProviderContext);
const [endpoint, setEndpoint] = useState(WEB_API_URL);
const handleMixinLogin = () => {
const url = `${webApi.baseUrl}/sessions/new?provider=mixin`;
const url = `${endpoint}/sessions/new?provider=mixin`;
EnjoyApp.view.load(url, { x: 0, y: 0 });
};
@@ -23,7 +23,11 @@ export const LoginForm = () => {
const { state, url, error } = event;
if (error) {
toast.error(error);
toast({
title: t("error"),
description: error,
variant: "destructive",
});
EnjoyApp.view.hide();
return;
}
@@ -32,13 +36,17 @@ export const LoginForm = () => {
const provider = new URL(url).pathname.split("/")[2];
const code = new URL(url).searchParams.get("code");
if (!url.startsWith(webApi.baseUrl)) {
toast.error(t("invalidRedirectUrl"));
if (!url.startsWith(endpoint)) {
toast({
title: t("error"),
description: t("invalidRedirectUrl"),
variant: "destructive",
});
EnjoyApp.view.hide();
}
if (provider && code) {
webApi
EnjoyApp.webApi
.auth({ provider, code })
.then((user) => {
login(user);
@@ -47,12 +55,22 @@ export const LoginForm = () => {
EnjoyApp.view.hide();
});
} else {
toast.error(t("failedToLogin"));
toast({
title: t("error"),
description: t("failedToLogin"),
variant: "destructive",
});
EnjoyApp.view.hide();
}
}
};
useEffect(() => {
EnjoyApp.app.apiUrl().then((url) => {
setEndpoint(url);
});
}, []);
useEffect(() => {
EnjoyApp.view.onViewState((_event, state) => onViewState(state));
@@ -60,17 +78,7 @@ export const LoginForm = () => {
EnjoyApp.view.removeViewStateListeners();
EnjoyApp.view.remove();
};
}, [webApi]);
if (user) {
return (
<div className="px-4 py-2 border rounded-lg w-full max-w-sm">
<UserSettings />
<Separator />
<LanguageSettings />
</div>
);
}
}, [endpoint]);
return (
<div className="w-full max-w-sm px-6 flex flex-col space-y-4">
View File
@@ -18,7 +18,7 @@ export const LookupResult = (props: {
const [loading, setLoading] = useState<boolean>(true);
if (!word) return null;
const { webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const lookup = (retries = 0) => {
if (!word) return;
@@ -28,7 +28,7 @@ export const LookupResult = (props: {
}
retries += 1;
webApi
EnjoyApp.webApi
.lookup({
word,
context,
View File
@@ -16,7 +16,6 @@ import {
MinimizeIcon,
GalleryHorizontalIcon,
SpellCheckIcon,
Share2Icon,
} from "lucide-react";
import { t } from "i18next";
import { type WaveSurferOptions } from "wavesurfer.js";
@@ -25,6 +24,7 @@ import { Tooltip } from "react-tooltip";
const PLAYBACK_RATE_OPTIONS = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75];
const MIN_ZOOM_RATIO = 0.25;
const MAX_ZOOM_RATIO = 5.0;
const ZOOM_RATIO_STEP = 0.25;
export const MediaPlayerControls = (props: {
isPlaying: boolean;
@@ -47,7 +47,6 @@ export const MediaPlayerControls = (props: {
setWavesurferOptions?: (options: Partial<WaveSurferOptions>) => void;
displayInlineCaption?: boolean;
setDisplayInlineCaption?: (display: boolean) => void;
onShare?: () => void;
}) => {
const {
isPlaying,
@@ -68,7 +67,6 @@ export const MediaPlayerControls = (props: {
setWavesurferOptions,
displayInlineCaption,
setDisplayInlineCaption,
onShare,
} = props;
return (
@@ -246,32 +244,20 @@ export const MediaPlayerControls = (props: {
</Button>
)}
<Button
variant="ghost"
data-tooltip-id="media-player-controls-tooltip"
data-tooltip-content={t("share")}
className="relative aspect-square p-0 h-10"
onClick={onShare}
>
<Share2Icon className="w-6 h-6" />
</Button>
<div className="absolute right-4">
<div className="flex items-center space-x-4">
{transcriptionDirty && (
<>
<Button
variant="secondary"
className=""
onClick={resetTranscription}
>
{t("reset")}
</Button>
<Button onClick={saveTranscription}>{t("save")}</Button>
</>
)}
{transcriptionDirty && (
<div className="absolute right-4">
<div className="flex items-center space-x-4">
<Button
variant="secondary"
className=""
onClick={resetTranscription}
>
{t("reset")}
</Button>
<Button onClick={saveTranscription}>{t("save")}</Button>
</div>
</div>
</div>
)}
<Tooltip id="media-player-controls-tooltip" />
</div>
View File
@@ -34,7 +34,7 @@ export const MediaPlayer = (props: {
mediaId: string;
mediaType: "Audio" | "Video";
mediaUrl: string;
mediaMd5?: string;
waveformCacheKey: string;
transcription: TranscriptionType;
// player controls
currentTime: number;
@@ -60,14 +60,13 @@ export const MediaPlayer = (props: {
setPlaybackRate: (value: number) => void;
displayInlineCaption?: boolean;
setDisplayInlineCaption?: (value: boolean) => void;
onShare?: () => void;
}) => {
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const {
mediaId,
mediaType,
mediaUrl,
mediaMd5,
waveformCacheKey,
transcription,
height = 200,
currentTime,
@@ -89,12 +88,16 @@ export const MediaPlayer = (props: {
setPlaybackRate,
displayInlineCaption,
setDisplayInlineCaption,
onShare,
} = props;
if (!mediaUrl) return;
const [wavesurfer, setWavesurfer] = useState(null);
const [waveform, setWaveForm] = useState<WaveFormDataType>(null);
const [waveform, setWaveForm] = useState<{
peaks: number[];
duration: number;
frequencies: number[];
sampleRate: number;
}>(null);
const containerRef = useRef<HTMLDivElement>();
const [mediaProvider, setMediaProvider] = useState<
HTMLAudioElement | HTMLVideoElement
@@ -176,7 +179,7 @@ export const MediaPlayer = (props: {
const renderPitchContour = (region: RegionType) => {
if (!region) return;
if (!waveform?.frequencies?.length) return;
if (!waveform.frequencies.length) return;
if (!wavesurfer) return;
const duration = wavesurfer.getDuration();
@@ -275,6 +278,7 @@ export const MediaPlayer = (props: {
const ws = WaveSurfer.create({
container: containerRef.current,
height,
url: mediaUrl,
waveColor: "#ddd",
progressColor: "rgba(0, 0, 0, 0.25)",
cursorColor: "#dc143c",
@@ -318,7 +322,6 @@ export const MediaPlayer = (props: {
const subscriptions = [
wavesurfer.on("play", () => setIsPlaying(true)),
wavesurfer.on("pause", () => setIsPlaying(false)),
wavesurfer.on("loading", (percent: number) => console.log(percent)),
wavesurfer.on("timeupdate", (time: number) => setCurrentTime(time)),
wavesurfer.on("decode", () => {
if (waveform?.frequencies) return;
@@ -335,7 +338,7 @@ export const MediaPlayer = (props: {
sampleRate,
frequencies: _frequencies,
};
EnjoyApp.waveforms.save(mediaMd5, _waveform);
EnjoyApp.cacheObjects.set(waveformCacheKey, _waveform);
setWaveForm(_waveform);
}),
wavesurfer.on("ready", () => {
@@ -474,8 +477,10 @@ export const MediaPlayer = (props: {
}, [wavesurfer, isPlaying]);
useEffect(() => {
EnjoyApp.waveforms.find(mediaMd5).then((waveform) => {
setWaveForm(waveform);
EnjoyApp.cacheObjects.get(waveformCacheKey).then((cached) => {
if (!cached) return;
setWaveForm(cached);
});
}, []);
@@ -531,7 +536,6 @@ export const MediaPlayer = (props: {
setWavesurferOptions={(options) => wavesurfer?.setOptions(options)}
displayInlineCaption={displayInlineCaption}
setDisplayInlineCaption={setDisplayInlineCaption}
onShare={onShare}
/>
</div>
View File
@@ -90,18 +90,18 @@ export const AssistantMessageComponent = (props: {
id={`message-${message.id}`}
className="flex items-end space-x-2 pr-10"
>
<Avatar className="w-8 h-8 bg-background avatar">
<Avatar className="w-8 h-8 bg-white avatar">
<AvatarImage></AvatarImage>
<AvatarFallback className="bg-background">AI</AvatarFallback>
<AvatarFallback className="bg-white">AI</AvatarFallback>
</Avatar>
<div className="flex flex-col gap-2 px-4 py-2 bg-background border rounded-lg shadow-sm w-full">
<div className="flex flex-col gap-2 px-4 py-2 bg-white border rounded-lg shadow-sm w-full prose max-w-prose">
{configuration?.autoSpeech && speeching ? (
<div className="p-4">
<LoaderIcon className="w-8 h-8 animate-spin" />
</div>
) : (
<Markdown
className="select-text prose"
className="select-text"
components={{
a({ node, children, ...props }) {
try {
View File
@@ -1,23 +1,12 @@
import {
AlertDialog,
AlertDialogTrigger,
AlertDialogHeader,
AlertDialogDescription,
AlertDialogTitle,
AlertDialogContent,
AlertDialogFooter,
AlertDialogCancel,
AlertDialogAction,
Avatar,
AvatarImage,
AvatarFallback,
Button,
DropdownMenu,
DropdownMenuTrigger,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuSeparator,
toast,
} from "@renderer/components/ui";
import { SpeechPlayer } from "@renderer/components";
import { useContext, useState } from "react";
@@ -28,11 +17,9 @@ import {
AlertCircleIcon,
CopyIcon,
CheckIcon,
Share2Icon,
} from "lucide-react";
import { useCopyToClipboard } from "@uidotdev/usehooks";
import { t } from "i18next";
import { useNavigate } from "react-router-dom";
import Markdown from "react-markdown";
export const UserMessageComponent = (props: {
@@ -43,40 +30,9 @@ export const UserMessageComponent = (props: {
}) => {
const { message, onResend, onRemove } = props;
const speech = message.speeches?.[0];
const { user, webApi } = useContext(AppSettingsProviderContext);
const { user } = useContext(AppSettingsProviderContext);
const [_, copyToClipboard] = useCopyToClipboard();
const [copied, setCopied] = useState<boolean>(false);
const navigate = useNavigate();
const handleShare = async () => {
if (message.role === "user") {
const content = message.content;
webApi
.createPost({
metadata: {
type: "prompt",
content,
},
})
.then(() => {
toast.success(t("sharedSuccessfully"), {
description: t("sharedPrompt"),
action: {
label: t("view"),
onClick: () => {
navigate("/community");
},
},
actionButtonStyle: {
backgroundColor: "var(--primary)",
},
});
})
.catch((err) => {
toast.error(t("shareFailed"), { description: err.message });
});
}
};
return (
<div
@@ -85,7 +41,7 @@ export const UserMessageComponent = (props: {
>
<DropdownMenu>
<div className="flex flex-col gap-2 px-4 py-2 bg-sky-500/30 border-sky-500 rounded-lg shadow-sm w-full">
<Markdown className="select-text prose">{message.content}</Markdown>
<Markdown className="select-text">{message.content}</Markdown>
{Boolean(speech) && <SpeechPlayer speech={speech} />}
@@ -125,34 +81,6 @@ export const UserMessageComponent = (props: {
}}
/>
)}
{message.createdAt && (
<AlertDialog>
<AlertDialogTrigger asChild>
<Share2Icon
data-tooltip-id="global-tooltip"
data-tooltip-content={t("share")}
className="w-3 h-3 cursor-pointer"
/>
</AlertDialogTrigger>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("sharePrompt")}</AlertDialogTitle>
<AlertDialogDescription>
{t("areYouSureToShareThisPromptToCommunity")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<AlertDialogAction asChild>
<Button variant="default" onClick={handleShare}>
{t("share")}
</Button>
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
)}
</div>
</div>
<DropdownMenuContent>
@@ -168,7 +96,7 @@ export const UserMessageComponent = (props: {
</DropdownMenuContent>
</DropdownMenu>
<Avatar className="w-8 h-8 bg-background">
<Avatar className="w-8 h-8 bg-white">
<AvatarImage src={user.avatarUrl} />
<AvatarFallback className="bg-primary text-white capitalize">
{user.name[0]}
View File
@@ -1,9 +0,0 @@
export * from "./posts";
export * from "./post-audio";
export * from "./post-card";
export * from "./post-medium";
export * from "./post-recording";
export * from "./post-story";
export * from "./post-options";
export * from "./post-actions";
View File
@@ -1,206 +0,0 @@
import { useContext, useState } from "react";
import { AppSettingsProviderContext } from "@renderer/context";
import { ConversationsShortcut } from "@renderer/components";
import {
AlertDialog,
AlertDialogTrigger,
AlertDialogContent,
AlertDialogDescription,
AlertDialogHeader,
AlertDialogTitle,
AlertDialogAction,
AlertDialogCancel,
AlertDialogFooter,
Button,
Dialog,
DialogTrigger,
DialogContent,
DialogHeader,
DialogTitle,
ScrollArea,
toast,
} from "@renderer/components/ui";
import { t } from "i18next";
import Markdown from "react-markdown";
import {
BotIcon,
CheckIcon,
CopyPlusIcon,
PlusCircleIcon,
ChevronRightIcon,
} from "lucide-react";
import { useCopyToClipboard } from "@uidotdev/usehooks";
import { Link } from "react-router-dom";
export const PostActions = (props: { post: PostType }) => {
const { post } = props;
const [_, copyToClipboard] = useCopyToClipboard();
const [copied, setCopied] = useState<boolean>(false);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [asking, setAsking] = useState<boolean>(false);
const [aiReplies, setAiReplies] = useState<MessageType[]>([]);
const handleAddMedium = async () => {
if (post.targetType !== "Medium") return;
const medium = post.target as MediumType;
if (!medium) return;
if (medium.mediumType === "Video") {
try {
const video = await EnjoyApp.videos.findOne({ md5: medium.md5 });
if (video) {
toast.info(t("videoAlreadyAddedToLibrary"));
return;
}
} catch (error) {
console.error(error);
}
EnjoyApp.videos
.create(medium.sourceUrl, {
coverUrl: medium.coverUrl,
md5: medium.md5,
})
.then(() => {
toast.success(t("videoSuccessfullyAddedToLibrary"));
});
} else if (medium.mediumType === "Audio") {
try {
const audio = await EnjoyApp.audios.findOne({ md5: medium.md5 });
if (audio) {
toast.info(t("audioAlreadyAddedToLibrary"));
return;
}
} catch (error) {
toast.error(error.message);
}
EnjoyApp.audios
.create(medium.sourceUrl, {
coverUrl: medium.coverUrl,
md5: medium.md5,
})
.then(() => {
toast.success(t("audioSuccessfullyAddedToLibrary"));
});
}
};
return (
<>
<div className="flex items-center space-x-2 justify-end">
{post.target && post.targetType === "Medium" && (
<AlertDialog>
<AlertDialogTrigger asChild>
<Button
data-tooltip-id="global-tooltip"
data-tooltip-content={t("addToLibary")}
data-tooltip-place="bottom"
variant="ghost"
size="sm"
className="px-1.5 rounded-full"
>
<PlusCircleIcon className="w-5 h-5 text-muted-foreground hover:text-primary" />
</Button>
</AlertDialogTrigger>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("addRecourse")}</AlertDialogTitle>
<AlertDialogDescription>
{(post.target as MediumType).mediumType === "Video" &&
t("areYouSureToAddThisVideoToYourLibrary")}
{(post.target as MediumType).mediumType === "Audio" &&
t("areYouSureToAddThisAudioToYourLibrary")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<AlertDialogAction onClick={handleAddMedium}>
{t("confirm")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
)}
{typeof post.metadata?.content === "string" && (
<Button
data-tooltip-id="global-tooltip"
data-tooltip-content={t("copy")}
data-tooltip-place="bottom"
variant="ghost"
size="sm"
className="px-1.5 rounded-full"
>
{copied ? (
<CheckIcon className="w-5 h-5 text-green-500" />
) : (
<CopyPlusIcon
className="w-5 h-5 text-muted-foreground hover:text-primary"
onClick={() => {
copyToClipboard(post.metadata.content as string);
setCopied(true);
setTimeout(() => {
setCopied(false);
}, 3000);
}}
/>
)}
</Button>
)}
{post.metadata?.type === "prompt" && (
<Dialog open={asking} onOpenChange={setAsking}>
<DialogTrigger asChild>
<Button
data-tooltip-id="global-tooltip"
data-tooltip-content={t("sendToAIAssistant")}
data-tooltip-place="bottom"
variant="ghost"
size="sm"
className="px-1.5 rounded-full"
>
<BotIcon className="w-5 h-5 text-muted-foreground hover:text-primary" />
</Button>
</DialogTrigger>
<DialogContent>
<DialogHeader>
<DialogTitle>{t("sendToAIAssistant")}</DialogTitle>
</DialogHeader>
<ConversationsShortcut
prompt={post.metadata.content as string}
onReply={(replies) => {
setAiReplies([...aiReplies, ...replies]);
setAsking(false);
}}
/>
</DialogContent>
<ScrollArea></ScrollArea>
</Dialog>
)}
</div>
{aiReplies.length > 0 && <AIReplies replies={aiReplies} />}
</>
);
};
const AIReplies = (props: { replies: MessageType[] }) => {
return (
<div>
<div className="space-y-2">
{props.replies.map((reply) => (
<div key={reply.id} className="bg-muted py-2 px-4 rounded">
<div className="mb-2 flex items-center justify-between">
<BotIcon className="w-5 h-5 text-blue-500" />
<Link to={`/conversations/${reply.conversationId}`}>
<ChevronRightIcon className="w-5 h-5 text-muted-foreground" />
</Link>
</div>
<Markdown className="prose select-text">{reply.content}</Markdown>
</div>
))}
</div>
</div>
);
};
View File
@@ -1,198 +0,0 @@
import { useEffect, useState, useRef, useCallback, useContext } from "react";
import { AppSettingsProviderContext } from "@renderer/context";
import { PitchContour } from "@renderer/components";
import WaveSurfer from "wavesurfer.js";
import { Button, Skeleton } from "@renderer/components/ui";
import { PlayIcon, PauseIcon } from "lucide-react";
import { useIntersectionObserver } from "@uidotdev/usehooks";
import { secondsToTimestamp } from "@renderer/lib/utils";
import { MediaPlayer, MediaProvider } from "@vidstack/react";
import {
DefaultAudioLayout,
defaultLayoutIcons,
} from "@vidstack/react/player/layouts/default";
export const STORAGE_WORKER_ENDPOINT = "https://enjoy-storage.baizhiheizi.com";
export const PostAudio = (props: {
audio: Partial<MediumType>;
height?: number;
}) => {
const { audio, height = 80 } = props;
const [currentTime, setCurrentTime] = useState<number>(0);
const { webApi } = useContext(AppSettingsProviderContext);
const [transcription, setTranscription] = useState<TranscriptionType>();
const currentTranscription = (transcription?.result || []).find(
(s) =>
currentTime >= s.offsets.from / 1000.0 &&
currentTime <= s.offsets.to / 1000.0
);
useEffect(() => {
webApi
.transcriptions({
targetMd5: audio.md5,
})
.then((response) => {
setTranscription(response?.transcriptions?.[0]);
});
}, [audio.md5]);
return (
<div className="w-full">
{audio.sourceUrl.startsWith(STORAGE_WORKER_ENDPOINT) ? (
<WavesurferPlayer
currentTime={currentTime}
setCurrentTime={setCurrentTime}
audio={audio}
height={height}
/>
) : (
<MediaPlayer
onTimeUpdate={({ currentTime: _currentTime }) => {
setCurrentTime(_currentTime);
}}
src={audio.sourceUrl}
>
<MediaProvider />
<DefaultAudioLayout icons={defaultLayoutIcons} />
</MediaPlayer>
)}
{currentTranscription && (
<div className="mt-2 bg-muted px-4 py-2 rounded">
<div className="text-muted-foreground text-center font-serif">
{currentTranscription.text}
</div>
</div>
)}
{audio.coverUrl && (
<div className="mt-2">
<img src={audio.coverUrl} className="w-full rounded" />
</div>
)}
</div>
);
};
const WavesurferPlayer = (props: {
audio: Partial<MediumType>;
height?: number;
currentTime: number;
setCurrentTime: (currentTime: number) => void;
}) => {
const { audio, height = 80, currentTime, setCurrentTime } = props;
const [initialized, setInitialized] = useState(false);
const [isPlaying, setIsPlaying] = useState(false);
const [wavesurfer, setWavesurfer] = useState(null);
const containerRef = useRef();
const [ref, entry] = useIntersectionObserver({
threshold: 1,
});
const [duration, setDuration] = useState<number>(0);
const onPlayClick = useCallback(() => {
wavesurfer.isPlaying() ? wavesurfer.pause() : wavesurfer.play();
}, [wavesurfer]);
useEffect(() => {
// use the intersection observer to only create the wavesurfer instance
// when the player is visible
if (!entry?.isIntersecting) return;
if (!audio.sourceUrl) return;
if (wavesurfer) return;
const ws = WaveSurfer.create({
container: containerRef.current,
url: audio.sourceUrl,
height,
barWidth: 1,
cursorWidth: 0,
autoCenter: true,
autoScroll: true,
dragToSeek: true,
hideScrollbar: true,
minPxPerSec: 100,
waveColor: "#ddd",
progressColor: "rgba(0, 0, 0, 0.25)",
});
setWavesurfer(ws);
}, [audio.sourceUrl, entry]);
useEffect(() => {
if (!wavesurfer) return;
const subscriptions = [
wavesurfer.on("play", () => {
setIsPlaying(true);
}),
wavesurfer.on("pause", () => {
setIsPlaying(false);
}),
wavesurfer.on("timeupdate", (time: number) => {
setCurrentTime(time);
}),
wavesurfer.on("decode", () => {
setDuration(wavesurfer.getDuration());
const peaks = wavesurfer.getDecodedData().getChannelData(0);
const sampleRate = wavesurfer.options.sampleRate;
wavesurfer.renderer.getWrapper().appendChild(
PitchContour({
peaks,
sampleRate,
height,
})
);
setInitialized(true);
}),
];
return () => {
subscriptions.forEach((unsub) => unsub());
wavesurfer?.destroy();
};
}, [wavesurfer]);
return (
<>
<div className="flex justify-end">
<span className="text-xs text-muted-foreground">
{secondsToTimestamp(duration)}
</span>
</div>
<div
ref={ref}
className="bg-background rounded-lg grid grid-cols-9 items-center relative h-[80px]"
>
{!initialized && (
<div className="col-span-9 flex flex-col justify-around h-[80px]">
<Skeleton className="h-3 w-full rounded-full" />
<Skeleton className="h-3 w-full rounded-full" />
<Skeleton className="h-3 w-full rounded-full" />
</div>
)}
<div className={`flex justify-center ${initialized ? "" : "hidden"}`}>
<Button
onClick={onPlayClick}
className="aspect-square rounded-full p-2 w-12 h-12 bg-blue-600 hover:bg-blue-500"
>
{isPlaying ? (
<PauseIcon className="w-6 h-6 text-white" />
) : (
<PlayIcon className="w-6 h-6 text-white" />
)}
</Button>
</div>
<div
className={`col-span-8 ${initialized ? "" : "hidden"}`}
ref={containerRef}
></div>
</div>
</>
);
};
View File
@@ -1,81 +0,0 @@
import { useContext } from "react";
import { AppSettingsProviderContext } from "@renderer/context";
import {
PostRecording,
PostActions,
PostMedium,
PostStory,
PostOptions,
} from "@renderer/components";
import { Avatar, AvatarImage, AvatarFallback } from "@renderer/components/ui";
import { formatDateTime } from "@renderer/lib/utils";
import { t } from "i18next";
import Markdown from "react-markdown";
export const PostCard = (props: {
post: PostType;
handleDelete: (id: string) => void;
}) => {
const { post, handleDelete } = props;
const { user } = useContext(AppSettingsProviderContext);
return (
<div className="rounded p-4 bg-background space-y-3">
<div className="flex items-center justify-between">
<div className="flex items-center space-x-2">
<Avatar>
<AvatarImage src={post.user.avatarUrl} />
<AvatarFallback className="text-xl">
{post.user.name[0].toUpperCase()}
</AvatarFallback>
</Avatar>
<div className="flex flex-col justify-between">
<div className="">{post.user.name}</div>
<div className="text-xs text-muted-foreground">
{formatDateTime(post.createdAt)}
</div>
</div>
</div>
{post.user.id == user.id && (
<PostOptions handleDelete={() => handleDelete(post.id)} />
)}
</div>
{post.metadata?.type === "prompt" && (
<>
<div className="text-xs text-muted-foreground">
{t("sharedPrompt")}
</div>
<Markdown className="prose prose-slate prose-pre:whitespace-normal select-text">
{"```prompt\n" + post.metadata.content + "\n```"}
</Markdown>
</>
)}
{post.targetType == "Medium" && (
<PostMedium medium={post.target as MediumType} />
)}
{post.targetType == "Recording" && (
<>
<div className="text-xs text-muted-foreground">
{t("sharedRecording")}
</div>
<PostRecording recording={post.target as RecordingType} />
</>
)}
{post.targetType == "Story" && (
<>
<div className="text-xs text-muted-foreground">
{t("sharedStory")}
</div>
<PostStory story={post.target as StoryType} />
</>
)}
<PostActions post={post} />
</div>
);
};
View File
@@ -1,45 +0,0 @@
import { PostAudio } from "@renderer/components";
import { t } from "i18next";
import { MediaPlayer, MediaProvider } from "@vidstack/react";
import {
DefaultVideoLayout,
defaultLayoutIcons,
} from "@vidstack/react/player/layouts/default";
export const PostMedium = (props: { medium: MediumType }) => {
const { medium } = props;
if (!medium.sourceUrl) return null;
return (
<div className="space-y-2">
{medium.mediumType == "Video" && (
<>
<div className="text-xs text-muted-foreground">
{t("sharedAudio")}
</div>
<MediaPlayer
poster={medium.coverUrl}
src={{
type: `${medium.mediumType.toLowerCase()}/${
medium.extname.replace(".", "") || "mp4"
}`,
src: medium.sourceUrl,
}}
>
<MediaProvider />
<DefaultVideoLayout icons={defaultLayoutIcons} />
</MediaPlayer>
</>
)}
{medium.mediumType == "Audio" && (
<>
<div className="text-xs text-muted-foreground">
{t("sharedAudio")}
</div>
<PostAudio audio={medium} />
</>
)}
</div>
);
};
View File
@@ -1,63 +0,0 @@
import { useState } from "react";
import {
AlertDialog,
AlertDialogCancel,
AlertDialogHeader,
AlertDialogTitle,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
Button,
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@renderer/components/ui";
import { MoreHorizontalIcon, Trash2Icon } from "lucide-react";
import { t } from "i18next";
export const PostOptions = (props: { handleDelete: () => void }) => {
const { handleDelete } = props;
const [deleting, setDeleting] = useState(false);
return (
<>
<DropdownMenu>
<DropdownMenuTrigger>
<MoreHorizontalIcon className="w-4 h-4" />
</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuItem className="cursor-pointer" onClick={() => setDeleting(true)}>
<span className="text-sm mr-auto text-destructive capitalize">
{t("delete")}
</span>
<Trash2Icon className="w-4 h-4 text-destructive" />
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
<AlertDialog open={deleting} onOpenChange={setDeleting}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("removeSharing")}</AlertDialogTitle>
<AlertDialogDescription>
{t("areYouSureToRemoveThisSharing")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<Button
variant="destructive"
onClick={() => {
handleDelete();
setDeleting(false);
}}
>
{t("delete")}
</Button>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
</>
);
};
View File
@@ -1,133 +0,0 @@
import { useEffect, useState, useRef, useCallback } from "react";
import { PitchContour } from "@renderer/components";
import WaveSurfer from "wavesurfer.js";
import { Button, Skeleton } from "@renderer/components/ui";
import { PlayIcon, PauseIcon } from "lucide-react";
import { useIntersectionObserver } from "@uidotdev/usehooks";
import { secondsToTimestamp } from "@renderer/lib/utils";
export const PostRecording = (props: {
recording: RecordingType;
height?: number;
}) => {
const { recording, height = 80 } = props;
const [initialized, setInitialized] = useState(false);
const [isPlaying, setIsPlaying] = useState(false);
const [wavesurfer, setWavesurfer] = useState(null);
const containerRef = useRef();
const [ref, entry] = useIntersectionObserver({
threshold: 1,
});
const [duration, setDuration] = useState<number>(0);
const onPlayClick = useCallback(() => {
wavesurfer.isPlaying() ? wavesurfer.pause() : wavesurfer.play();
}, [wavesurfer]);
useEffect(() => {
// use the intersection observer to only create the wavesurfer instance
// when the player is visible
if (!entry?.isIntersecting) return;
if (!recording.src) return;
if (wavesurfer) return;
const ws = WaveSurfer.create({
container: containerRef.current,
url: recording.src,
height,
barWidth: 1,
cursorWidth: 0,
autoCenter: true,
autoScroll: true,
dragToSeek: true,
hideScrollbar: true,
minPxPerSec: 100,
waveColor: "rgba(0, 0, 0, 0.25)",
progressColor: "rgba(0, 0, 0, 0.5)",
});
setWavesurfer(ws);
}, [recording.src, entry]);
useEffect(() => {
if (!wavesurfer) return;
const subscriptions = [
wavesurfer.on("play", () => {
setIsPlaying(true);
}),
wavesurfer.on("pause", () => {
setIsPlaying(false);
}),
wavesurfer.on("decode", () => {
setDuration(wavesurfer.getDuration());
const peaks = wavesurfer.getDecodedData().getChannelData(0);
const sampleRate = wavesurfer.options.sampleRate;
wavesurfer.renderer.getWrapper().appendChild(
PitchContour({
peaks,
sampleRate,
height,
})
);
setInitialized(true);
}),
];
return () => {
subscriptions.forEach((unsub) => unsub());
wavesurfer?.destroy();
};
}, [wavesurfer]);
return (
<div className="w-full">
<div className="flex justify-end">
<span className="text-xs text-muted-foreground">
{secondsToTimestamp(duration)}
</span>
</div>
<div
ref={ref}
className="bg-sky-500/30 rounded-lg grid grid-cols-9 items-center relative h-[80px]"
>
{!initialized && (
<div className="col-span-9 flex flex-col justify-around h-[80px]">
<Skeleton className="h-2 w-full rounded-full" />
<Skeleton className="h-2 w-full rounded-full" />
<Skeleton className="h-2 w-full rounded-full" />
</div>
)}
<div className={`flex justify-center ${initialized ? "" : "hidden"}`}>
<Button
onClick={onPlayClick}
className="aspect-square rounded-full p-2 w-12 h-12 bg-blue-600 hover:bg-blue-500"
>
{isPlaying ? (
<PauseIcon className="w-6 h-6 text-white" />
) : (
<PlayIcon className="w-6 h-6 text-white" />
)}
</Button>
</div>
<div
className={`col-span-8 ${initialized ? "" : "hidden"}`}
ref={containerRef}
></div>
</div>
{
recording.referenceText && (
<div className="mt-2 bg-muted px-4 py-2 rounded">
<div className="text-muted-foreground text-center font-serif">
{recording.referenceText}
</div>
</div>
)
}
</div>
);
};
View File
@@ -1,25 +0,0 @@
import { Link } from "react-router-dom";
export const PostStory = (props: { story: StoryType }) => {
const { story } = props;
return (
<Link className="block" to={`/stories/${story.id}`}>
<div className="rounded-lg flex items-start border">
<div className="aspect-square h-36 bg-muted">
<img
src={story.metadata?.image}
className="w-full h-full object-cover"
/>
</div>
<div className="px-4 py-2">
<div className="line-clamp-2 text-lg font-semibold mb-2">
{story.metadata?.title}
</div>
<div className="line-clamp-3 text-sm text-muted-foreground">
{story.metadata?.description}
</div>
</div>
</div>
</Link>
);
};
View File
@@ -1,74 +0,0 @@
import { useContext, useEffect, useState } from "react";
import { AppSettingsProviderContext } from "@renderer/context";
import { PostCard, LoaderSpin } from "@renderer/components";
import { toast, Button } from "@renderer/components//ui";
import { t } from "i18next";
export const Posts = () => {
const { webApi } = useContext(AppSettingsProviderContext);
const [loading, setLoading] = useState<boolean>(true);
const [posts, setPosts] = useState<PostType[]>([]);
const [nextPage, setNextPage] = useState(1);
const handleDelete = (id: string) => {
webApi
.deletePost(id)
.then(() => {
toast.success(t("removeSharingSuccessfully"));
setPosts(posts.filter((post) => post.id !== id));
})
.catch((error) => {
toast.error(t("removeSharingFailed"), { description: error.message });
});
};
const fetchPosts = async (page: number = nextPage) => {
if (!page) return;
webApi
.posts({
page,
items: 10,
})
.then((res) => {
setPosts([...posts, ...res.posts]);
setNextPage(res.next);
})
.catch((err) => {
toast.error(err.message);
})
.finally(() => {
setLoading(false);
});
};
useEffect(() => {
fetchPosts();
}, []);
if (loading) {
return <LoaderSpin />;
}
return (
<div className="max-w-screen-sm mx-auto">
{posts.length === 0 && (
<div className="text-center text-gray-500">{t("noOneSharedYet")}</div>
)}
<div className="space-y-4">
{posts.map((post) => (
<PostCard key={post.id} post={post} handleDelete={handleDelete} />
))}
</div>
{nextPage && (
<div className="py-4 flex justify-center">
<Button variant="link" onClick={() => fetchPosts(nextPage)}>
{t("loadMore")}
</Button>
</div>
)}
</div>
);
};
View File
@@ -1,5 +1,5 @@
import { t } from "i18next";
import { Button, toast } from "@renderer/components/ui";
import { Button, useToast } from "@renderer/components/ui";
import { AppSettingsProviderContext } from "@renderer/context";
import { useState, useContext } from "react";
import { LoaderIcon } from "lucide-react";
@@ -7,11 +7,14 @@ import { LoaderIcon } from "lucide-react";
export const About = () => {
const { version } = useContext(AppSettingsProviderContext);
const [checking, setChecking] = useState<boolean>(false);
const { toast } = useToast();
const checkUpdate = () => {
setChecking(true);
setTimeout(() => {
setChecking(false);
toast.info(t("alreadyLatestVersion"));
toast({
description: t("alreadyLatestVersion"),
});
}, 1000);
};
View File
@@ -10,35 +10,6 @@ export const AdvancedSettings = () => {
{t("advancedSettings")}
</div>
<div className="flex items-start justify-between py-4">
<div className="">
<div className="mb-2">{t("resetSettings")}</div>
<div className="text-sm text-muted-foreground mb-2">
{t("logoutAndRemoveAllPersonalSettings")}
</div>
</div>
<div className="">
<div className="mb-2 flex justify-end">
<ResetAllButton>
<Button
variant="secondary"
className="text-destructive"
size="sm"
>
{t("resetSettings")}
</Button>
</ResetAllButton>
</div>
<div className="text-xs text-muted-foreground">
<InfoIcon className="mr-1 w-3 h-3 inline" />
<span>{t("relaunchIsNeededAfterChanged")}</span>
</div>
</div>
</div>
<Separator />
<div className="flex items-start justify-between py-4">
<div className="">
<div className="mb-2">{t("resetAll")}</div>
@@ -55,7 +26,7 @@ export const AdvancedSettings = () => {
className="text-destructive"
size="sm"
>
{t("resetAll")}
{t("reset")}
</Button>
</ResetAllButton>
</div>
View File
@@ -22,12 +22,7 @@ import {
Input,
Label,
Separator,
toast,
Select,
SelectTrigger,
SelectItem,
SelectValue,
SelectContent,
useToast,
} from "@renderer/components/ui";
import { WhisperModelOptions } from "@renderer/components";
import {
@@ -36,7 +31,7 @@ import {
} from "@renderer/context";
import { useContext, useState, useRef, useEffect } from "react";
import { redirect } from "react-router-dom";
import { InfoIcon, EditIcon } from "lucide-react";
import { InfoIcon } from "lucide-react";
export const BasicSettings = () => {
return (
@@ -44,12 +39,8 @@ export const BasicSettings = () => {
<div className="font-semibold mb-4 capitilized">{t("basicSettings")}</div>
<UserSettings />
<Separator />
<LanguageSettings />
<Separator />
<LibraryPathSettings />
<Separator />
<FfmpegSettings />
<Separator />
<WhisperSettings />
<Separator />
<OpenaiSettings />
@@ -60,7 +51,7 @@ export const BasicSettings = () => {
);
};
export const UserSettings = () => {
const UserSettings = () => {
const { user, logout } = useContext(AppSettingsProviderContext);
if (!user) return null;
@@ -113,46 +104,6 @@ export const UserSettings = () => {
);
};
export const LanguageSettings = () => {
const { language, switchLanguage } = useContext(AppSettingsProviderContext);
return (
<div className="flex items-start justify-between py-4">
<div className="">
<div className="mb-2">{t("language")}</div>
<div className="text-sm text-muted-foreground mb-2">
{language === "en" ? "English" : "简体中文"}
</div>
</div>
<div className="">
<div className="flex items-center justify-end space-x-2 mb-2">
<Select
value={language}
onValueChange={(value: "en" | "zh-CN") => {
switchLanguage(value);
}}
>
<SelectTrigger className="text-xs">
<SelectValue>
{language === "en" ? "English" : "简体中文"}
</SelectValue>
</SelectTrigger>
<SelectContent>
<SelectItem className="text-xs" value="en">
English
</SelectItem>
<SelectItem className="text-xs" value="zh-CN">
简体中文
</SelectItem>
</SelectContent>
</Select>
</div>
</div>
</div>
);
};
const LibraryPathSettings = () => {
const { libraryPath, EnjoyApp } = useContext(AppSettingsProviderContext);
@@ -188,11 +139,7 @@ const LibraryPathSettings = () => {
<Button variant="secondary" size="sm" onClick={openLibraryPath}>
{t("open")}
</Button>
<Button
variant="secondary"
size="sm"
onClick={handleChooseLibraryPath}
>
<Button variant="default" size="sm" onClick={handleChooseLibraryPath}>
{t("edit")}
</Button>
</div>
@@ -205,107 +152,6 @@ const LibraryPathSettings = () => {
);
};
const FfmpegSettings = () => {
const { EnjoyApp, setFfmegConfig, ffmpegConfig } = useContext(
AppSettingsProviderContext
);
const [editing, setEditing] = useState(false);
const refreshFfmpegConfig = async () => {
EnjoyApp.settings.getFfmpegConfig().then((config) => {
setFfmegConfig(config);
});
};
const handleChooseFfmpeg = async () => {
const filePaths = await EnjoyApp.dialog.showOpenDialog({
properties: ["openFile"],
});
const path = filePaths?.[0];
if (!path) return;
if (path.includes("ffmpeg")) {
EnjoyApp.settings.setFfmpegConfig({
...ffmpegConfig,
ffmpegPath: path,
});
refreshFfmpegConfig();
} else if (path.includes("ffprobe")) {
EnjoyApp.settings.setFfmpegConfig({
...ffmpegConfig,
ffprobePath: path,
});
refreshFfmpegConfig();
} else {
toast.error(t("invalidFfmpegPath"));
}
};
return (
<>
<div className="flex items-start justify-between py-4">
<div className="">
<div className="mb-2">FFmpeg</div>
<div className="flex items-center space-x-4">
<span className=" text-sm text-muted-foreground">
<b>ffmpeg</b>: {ffmpegConfig?.ffmpegPath || ""}
</span>
{editing && (
<Button onClick={handleChooseFfmpeg} variant="ghost" size="icon">
<EditIcon className="w-4 h-4 text-muted-foreground" />
</Button>
)}
</div>
<div className="flex items-center space-x-4">
<span className=" text-sm text-muted-foreground">
<b>ffprobe</b>: {ffmpegConfig?.ffprobePath || ""}
</span>
{editing && (
<Button onClick={handleChooseFfmpeg} variant="ghost" size="icon">
<EditIcon className="w-4 h-4 text-muted-foreground" />
</Button>
)}
</div>
</div>
<div className="">
<div className="flex items-center justify-end space-x-2 mb-2">
<Button
variant="secondary"
size="sm"
onClick={() => {
EnjoyApp.ffmpeg
.discover()
.then(({ ffmpegPath, ffprobePath }) => {
if (ffmpegPath && ffprobePath) {
toast.success(
t("ffmpegFoundAt", {
path: ffmpegPath + ", " + ffprobePath,
})
);
} else {
toast.warning(t("ffmpegNotFound"));
}
refreshFfmpegConfig();
});
}}
>
{t("scan")}
</Button>
<Button
variant={editing ? "outline" : "secondary"}
size="sm"
onClick={() => setEditing(!editing)}
>
{editing ? t("cancel") : t("edit")}
</Button>
</div>
</div>
</div>
</>
);
};
const WhisperSettings = () => {
const { whisperModel, whisperModelsPath } = useContext(
AppSettingsProviderContext
@@ -320,9 +166,7 @@ const WhisperSettings = () => {
<Dialog>
<DialogTrigger asChild>
<Button variant="secondary" size="sm">
{t("edit")}
</Button>
<Button size="sm">{t("edit")}</Button>
</DialogTrigger>
<DialogContent>
<DialogHeader>{t("sttAiModel")}</DialogHeader>
@@ -352,6 +196,7 @@ const OpenaiSettings = () => {
const { openai, setOpenai } = useContext(AISettingsProviderContext);
const [editing, setEditing] = useState(false);
const ref = useRef<HTMLInputElement>();
const { toast } = useToast();
const handleSave = () => {
if (!ref.current) return;
@@ -361,7 +206,10 @@ const OpenaiSettings = () => {
});
setEditing(false);
toast.success(t("openaiKeySaved"));
toast({
title: t("success"),
description: t("openaiKeySaved"),
});
};
useEffect(() => {
@@ -399,7 +247,7 @@ const OpenaiSettings = () => {
</div>
<div className="">
<Button
variant={editing ? "outline" : "secondary"}
variant={editing ? "secondary" : "default"}
size="sm"
onClick={() => setEditing(!editing)}
>
@@ -416,6 +264,7 @@ const GoogleGenerativeAiSettings = () => {
);
const [editing, setEditing] = useState(false);
const ref = useRef<HTMLInputElement>();
const { toast } = useToast();
const handleSave = () => {
if (!ref.current) return;
@@ -425,7 +274,10 @@ const GoogleGenerativeAiSettings = () => {
});
setEditing(false);
toast.success(t("googleGenerativeAiKeySaved"));
toast({
title: t("success"),
description: t("googleGenerativeAiKeySaved"),
});
};
useEffect(() => {
@@ -463,7 +315,7 @@ const GoogleGenerativeAiSettings = () => {
</div>
<div className="">
<Button
variant={editing ? "outline" : "secondary"}
variant={editing ? "secondary" : "default"}
size="sm"
onClick={() => setEditing(!editing)}
>
View File
@@ -1,23 +1,18 @@
import { t } from "i18next";
import { Button, ScrollArea } from "@renderer/components/ui";
import {
BasicSettings,
AdvancedSettings,
About,
Hotkeys,
} from "@renderer/components";
import { BasicSettings, AdvancedSettings, About, Hotkeys } from "@renderer/components";
import { useState } from "react";
export const Preferences = () => {
const TABS = [
{
value: "basic",
label: t("basicSettingsShort"),
label: t("basicSettings"),
component: () => <BasicSettings />,
},
{
value: "advanced",
label: t("advancedSettingsShort"),
label: t("advancedSettings"),
component: () => <AdvancedSettings />,
},
{
@@ -35,8 +30,8 @@ export const Preferences = () => {
const [activeTab, setActiveTab] = useState<string>("basic");
return (
<div className="grid grid-cols-5 overflow-hidden h-full">
<ScrollArea className="h-full col-span-1 bg-muted/50 p-4">
<div className="grid grid-cols-5">
<ScrollArea className="col-span-1 h-full bg-muted/50 p-4">
<div className="py-2 text-muted-foreground mb-4">
{t("sidebar.preferences")}
</div>
@@ -55,7 +50,7 @@ export const Preferences = () => {
</Button>
))}
</ScrollArea>
<ScrollArea className="h-full col-span-4 py-6 px-10">
<ScrollArea className="col-span-4 p-6">
{TABS.find((tab) => tab.value === activeTab)?.component()}
</ScrollArea>
</div>
View File
@@ -96,7 +96,7 @@ export const PronunciationAssessmentScoreResult = (props: {
</div>
{!pronunciationScore && (
<div className="w-full h-full absolute z-30 bg-background/10 flex items-center justify-center">
<div className="w-full h-full absolute z-30 bg-white/10 flex items-center justify-center">
<Button size="lg" disabled={assessing} onClick={onAssess}>
{assessing && (
<LoaderIcon className="w-4 h-4 animate-spin inline mr-2" />
View File
@@ -4,7 +4,7 @@ import { useState, useEffect, useRef } from "react";
import RecordPlugin from "wavesurfer.js/dist/plugins/record";
import WaveSurfer from "wavesurfer.js";
import { cn } from "@renderer/lib/utils";
import { RadialProgress, toast } from "@renderer/components/ui";
import { RadialProgress, useToast } from "@renderer/components/ui";
import { useHotkeys } from "react-hotkeys-hook";
export const RecordButton = (props: {
@@ -16,6 +16,7 @@ export const RecordButton = (props: {
const { className, disabled, onRecordBegin, onRecordEnd } = props;
const [isRecording, setIsRecording] = useState<boolean>(false);
const [duration, setDuration] = useState<number>(0);
const { toast } = useToast();
useHotkeys(["command+alt+r", "control+alt+r"], () => {
if (disabled) return;
@@ -66,7 +67,10 @@ export const RecordButton = (props: {
if (duration > 1000) {
onRecordEnd(blob, duration);
} else {
toast.warning(t("recordTooShort"));
toast({
description: t("recordTooShort"),
variant: "warning",
});
}
}}
/>
View File
@@ -4,26 +4,18 @@ import { RecordingPlayer } from "@renderer/components";
import {
AlertDialog,
AlertDialogHeader,
AlertDialogTrigger,
AlertDialogDescription,
AlertDialogTitle,
AlertDialogContent,
AlertDialogFooter,
AlertDialogCancel,
AlertDialogAction,
Button,
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
toast,
} from "@renderer/components/ui";
import {
MoreHorizontalIcon,
Trash2Icon,
Share2Icon,
GaugeCircleIcon,
} from "lucide-react";
import { ChevronDownIcon, Trash2Icon, InfoIcon, Share2Icon } from "lucide-react";
import { formatDateTime, secondsToTimestamp } from "@renderer/lib/utils";
import { t } from "i18next";
@@ -34,67 +26,39 @@ export const RecordingCard = (props: {
}) => {
const { recording, id, onSelect } = props;
const [isDeleteDialogOpen, setIsDeleteDialogOpen] = useState(false);
const { EnjoyApp, webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [isPlaying, setIsPlaying] = useState(false);
const handleDelete = () => {
EnjoyApp.recordings.destroy(recording.id);
};
const handleShare = async () => {
if (!recording.updatedAt) {
try {
await EnjoyApp.recordings.upload(recording.id);
} catch (error) {
toast.error(t("shareFailed"), { description: error.message });
return;
}
}
webApi
.createPost({
targetId: recording.id,
targetType: "Recording",
})
.then(() => {
toast.success(t("sharedSuccessfully"), {
description: t("sharedRecording"),
});
})
.catch((error) => {
toast.error(t("shareFailed"), {
description: error.message,
});
});
};
return (
<div id={id} className="flex items-center justify-end px-4 transition-all">
<div className="w-full">
<div className="bg-background rounded-lg py-2 px-4 relative mb-1">
<div className="flex items-center justify-end space-x-2">
<span className="text-xs text-muted-foreground">
{secondsToTimestamp(recording.duration / 1000)}
</span>
</div>
<DropdownMenu>
<div className="w-full">
<div className="bg-white rounded-lg py-2 px-4 relative mb-1">
<div className="flex items-center justify-end space-x-2">
<span className="text-xs text-muted-foreground">
{secondsToTimestamp(recording.duration / 1000)}
</span>
</div>
<RecordingPlayer
recording={recording}
isPlaying={isPlaying}
setIsPlaying={setIsPlaying}
/>
<RecordingPlayer
recording={recording}
isPlaying={isPlaying}
setIsPlaying={setIsPlaying}
/>
<div className="flex items-center justify-end space-x-2">
<Button
data-tooltip-id="global-tooltip"
data-tooltip-content={t("pronunciationAssessment")}
data-tooltip-place="bottom"
onClick={onSelect}
variant="ghost"
size="sm"
className="p-1 h-6"
>
<GaugeCircleIcon
className={`w-4 h-4
<div className="flex items-center justify-end space-x-2">
<Button
onClick={onSelect}
variant="ghost"
size="sm"
className="p-1 h-6"
>
<InfoIcon
className={`w-4 h-4
${
recording.pronunciationAssessment
? recording.pronunciationAssessment
@@ -107,60 +71,29 @@ export const RecordingCard = (props: {
: "text-muted-foreground"
}
`}
/>
</Button>
<AlertDialog>
<AlertDialogTrigger asChild>
<Button
data-tooltip-id="global-tooltip"
data-tooltip-content={t("share")}
data-tooltip-place="bottom"
variant="ghost"
size="sm"
className="p-1 h-6"
>
<Share2Icon className="w-4 h-4 text-muted-foreground" />
</Button>
</AlertDialogTrigger>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("shareRecording")}</AlertDialogTitle>
<AlertDialogDescription>
{t("areYouSureToShareThisRecordingToCommunity")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<AlertDialogAction asChild>
<Button onClick={handleShare}>{t("share")}</Button>
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
<DropdownMenu>
/>
</Button>
<DropdownMenuTrigger>
<MoreHorizontalIcon className="w-4 h-4 text-muted-foreground" />
<ChevronDownIcon className="w-4 h-4 text-muted-foreground" />
</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuItem onClick={() => setIsDeleteDialogOpen(true)}>
<span className="mr-auto text-destructive capitalize">
{t("delete")}
</span>
<Trash2Icon className="w-4 h-4 text-destructive" />
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
</div>
<div className="flex justify-end">
<span className="text-xs text-muted-foreground">
{formatDateTime(recording.createdAt)}
</span>
</div>
</div>
<div className="flex justify-end">
<span className="text-xs text-muted-foreground">
{formatDateTime(recording.createdAt)}
</span>
</div>
</div>
<DropdownMenuContent>
<DropdownMenuItem onClick={() => setIsDeleteDialogOpen(true)}>
<span className="mr-auto text-destructive capitalize">
{t("delete")}
</span>
<Trash2Icon className="w-4 h-4 text-destructive" />
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
<AlertDialog
open={isDeleteDialogOpen}

View File

@@ -1,7 +1,7 @@
import { useEffect, useState, useRef, useCallback } from "react";
import WaveSurfer from "wavesurfer.js";
import { PitchContour } from "@renderer/components";
import { Button, Skeleton } from "@renderer/components/ui";
import { Button } from "@renderer/components/ui";
import { PlayIcon, PauseIcon } from "lucide-react";
import { useIntersectionObserver } from "@uidotdev/usehooks";
@@ -30,7 +30,6 @@ export const RecordingPlayer = (props: {
const [ref, entry] = useIntersectionObserver({
threshold: 0,
});
const [initialized, setInitialized] = useState(false);
const onPlayClick = useCallback(() => {
wavesurfer.isPlaying() ? wavesurfer.pause() : wavesurfer.play();
@@ -41,7 +40,6 @@ export const RecordingPlayer = (props: {
// when the player is visible
if (!entry?.isIntersecting) return;
if (!recording?.src) return;
if (wavesurfer) return;
const ws = WaveSurfer.create({
container: containerRef.current,
@@ -80,7 +78,6 @@ export const RecordingPlayer = (props: {
height,
})
);
setInitialized(true);
}),
];
@@ -108,15 +105,7 @@ export const RecordingPlayer = (props: {
return (
<div ref={ref} className="grid grid-cols-11 xl:grid-cols-12 items-center">
{!initialized && (
<div className="col-span-9 flex flex-col justify-around h-[80px]">
<Skeleton className="h-3 w-full rounded-full" />
<Skeleton className="h-3 w-full rounded-full" />
<Skeleton className="h-3 w-full rounded-full" />
</div>
)}
<div className={`flex justify-center ${initialized ? "" : "hidden"}`}>
<div className="flex justify-center">
<Button
onClick={onPlayClick}
className="aspect-square rounded-full p-2 w-12 h-12 bg-blue-600 hover:bg-blue-500"
@@ -129,10 +118,7 @@ export const RecordingPlayer = (props: {
</Button>
</div>
<div
className={`col-span-10 xl:col-span-11 ${initialized ? "" : "hidden"}`}
ref={containerRef}
></div>
<div className="col-span-10 xl:col-span-11" ref={containerRef}></div>
</div>
);
};

View File

@@ -44,35 +44,3 @@ export const ResetAllButton = (props: { children: React.ReactNode }) => {
</AlertDialog>
);
};
export const ResetSettingsButton = (props: { children: React.ReactNode }) => {
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const reset = () => {
EnjoyApp.app.resetSettings();
};
return (
<AlertDialog>
<AlertDialogTrigger asChild>{props.children}</AlertDialogTrigger>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("resetSettings")}</AlertDialogTitle>
</AlertDialogHeader>
<AlertDialogDescription>
{t("resetSettingsConfirmation")}
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<AlertDialogAction
className="bg-destructive hover:bg-destructive-hover"
onClick={reset}
>
{t("resetSettings")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
);
};

View File

@@ -14,7 +14,6 @@ import {
BookMarkedIcon,
UserIcon,
BotIcon,
UsersRoundIcon,
} from "lucide-react";
import { useLocation, Link } from "react-router-dom";
import { t } from "i18next";
@@ -51,21 +50,6 @@ export const Sidebar = () => {
<span className="hidden xl:block">{t("sidebar.home")}</span>
</Button>
</Link>
<Link
to="/community"
data-tooltip-id="sidebar-tooltip"
data-tooltip-content={t("sidebar.community")}
className="block"
>
<Button
variant={activeTab === "" ? "secondary" : "ghost"}
className="w-full xl:justify-start"
>
<UsersRoundIcon className="xl:mr-2 h-5 w-5" />
<span className="hidden xl:block">{t("sidebar.community")}</span>
</Button>
</Link>
</div>
</div>

View File

@@ -7,10 +7,10 @@ import { AppSettingsProviderContext } from "@renderer/context";
export const StoriesSegment = () => {
const [stories, setStorys] = useState<StoryType[]>([]);
const { webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const fetchStorys = async () => {
webApi.mineStories().then((response) => {
EnjoyApp.webApi.mineStories().then((response) => {
if (response?.stories) {
setStorys(response.stories);
}

View File

@@ -2,16 +2,6 @@ import {
Alert,
AlertTitle,
AlertDescription,
AlertDialog,
AlertDialogTrigger,
AlertDialogContent,
AlertDialogHeader,
AlertDialogTitle,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogCancel,
AlertDialogAction,
Button,
ScrollArea,
Separator,
Sheet,
@@ -27,7 +17,6 @@ import {
ScanTextIcon,
LoaderIcon,
StarIcon,
Share2Icon,
} from "lucide-react";
import { t } from "i18next";
@@ -47,7 +36,6 @@ export const StoryToolbar = (props: {
marked?: boolean;
toggleMarked?: () => void;
pendingLookups?: LookupType[];
handleShare?: () => void;
}) => {
const {
starred,
@@ -59,7 +47,6 @@ export const StoryToolbar = (props: {
toggleMarked,
meanings = [],
pendingLookups = [],
handleShare,
} = props;
const [vocabularyVisible, setVocabularyVisible] = useState<boolean>(
@@ -89,27 +76,6 @@ export const StoryToolbar = (props: {
<ToolbarButton toggled={starred} onClick={toggleStarred}>
<StarIcon className="w-6 h-6" />
</ToolbarButton>
<AlertDialog>
<AlertDialogTrigger asChild>
<ToolbarButton toggled={false} onClick={toggleStarred}>
<Share2Icon className="w-6 h-6" />
</ToolbarButton>
</AlertDialogTrigger>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("shareStory")}</AlertDialogTitle>
<AlertDialogDescription>
{t("areYouSureToShareThisStoryToCommunity")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<AlertDialogAction>
<Button onClick={handleShare}>{t("share")}</Button>
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
</FloatingToolbar>
<Sheet

View File

@@ -13,7 +13,7 @@ import { debounce , uniq } from "lodash";
import Mark from "mark.js";
export const StoryViewer = (props: {
story: Partial<StoryType> & Partial<CreateStoryParamsType>;
story: StoryType & Partial<CreateStoryParamsType>;
marked?: boolean;
meanings?: MeaningType[];
setMeanings: (meanings: MeaningType[]) => void;
@@ -96,7 +96,7 @@ export const StoryViewer = (props: {
return (
<>
<div className="w-full max-w-2xl xl:max-w-3xl mx-auto sticky bg-background top-0 z-30 px-4 py-2 border-b">
<div className="w-full max-w-2xl xl:max-w-3xl mx-auto sticky bg-white top-0 z-30 px-4 py-2 border-b">
<div className="w-full flex items-center space-x-4">
<Button
variant="ghost"
@@ -130,10 +130,10 @@ export const StoryViewer = (props: {
</div>
</div>
</div>
<div className="bg-background py-6 px-8 max-w-2xl xl:max-w-3xl mx-auto relative shadow-lg">
<div className="bg-white py-6 px-8 max-w-2xl xl:max-w-3xl mx-auto relative shadow-lg">
<article
ref={ref}
className="relative select-text prose dark:prose-invert prose-lg xl:prose-xl font-serif text-lg"
className="relative select-text prose prose-lg xl:prose-xl font-serif text-lg"
>
<h2>
{story.title.split(" ").map((word, i) => (

View File

@@ -41,8 +41,8 @@ export const ToolbarButton = (props: {
className={cn(
`rounded-full p-3 h-12 w-12 ${
toggled
? "bg-primary dark:bg-background text-background dark:text-foreground"
: "bg-background dark:bg-muted text-muted-foreground hover:text-background "
? "bg-primary text-white"
: "bg-white text-muted-foreground hover:text-white "
}`,
className
)}

View File

@@ -13,9 +13,9 @@ export * from "./input";
export * from "./avatar";
export * from "./alert-dialog";
export * from "./card";
// export * from "./toast";
// export * from "./use-toast";
// export * from "./toaster";
export * from "./toast";
export * from "./use-toast";
export * from "./toaster";
export * from "./toggle";
export * from "./radio-group";
export * from "./scroll-area";
@@ -33,4 +33,3 @@ export * from "./select";
export * from "./sheet";
export * from "./hover-card";
export * from "./floating-toolbar";
export * from "./sonner";

View File

@@ -1,31 +0,0 @@
"use client";
import { useTheme } from "next-themes";
import { Toaster as Sonner, toast } from "sonner";
type ToasterProps = React.ComponentProps<typeof Sonner>;
const Toaster = ({ ...props }: ToasterProps) => {
const { theme = "light" } = useTheme();
return (
<Sonner
theme={theme as ToasterProps["theme"]}
className="toaster group"
toastOptions={{
classNames: {
toast:
"group toast group-[.toaster]:bg-background group-[.toaster]:text-foreground group-[.toaster]:border-border group-[.toaster]:shadow-lg",
description: "group-[.toast]:text-muted-foreground",
actionButton:
"group-[.toast]:bg-primary group-[.toast]:text-primary-foreground",
cancelButton:
"group-[.toast]:bg-muted group-[.toast]:text-muted-foreground",
},
}}
{...props}
/>
);
};
export { Toaster, toast };

View File

@@ -1 +0,0 @@
export * from './users-rankings';

View File

@@ -1,83 +0,0 @@
import { useContext, useEffect, useState } from "react";
import {
Avatar,
AvatarImage,
AvatarFallback,
Card,
CardTitle,
CardHeader,
CardContent,
} from "@renderer/components/ui";
import { AppSettingsProviderContext } from "@renderer/context";
import { t } from "i18next";
import { formatDuration } from "@renderer/lib/utils";
export const UsersRankings = () => {
return (
<div className="grid grid-cols-2 gap-6 mb-6">
<RankingsCard range="day" />
<RankingsCard range="week" />
<RankingsCard range="month" />
<RankingsCard range="all" />
</div>
);
};
const RankingsCard = (props: {
range: "day" | "week" | "month" | "year" | "all";
}) => {
const { range } = props;
const { webApi } = useContext(AppSettingsProviderContext);
const [rankings, setRankings] = useState<UserType[]>([]);
const fetchRankings = async () => {
webApi.rankings(range).then(
(res) => {
setRankings(res.rankings);
},
(err) => {
console.error(err);
}
);
};
useEffect(() => {
fetchRankings();
}, []);
return (
<Card>
<CardHeader>
<CardTitle>{t(`${range}Rankings`)}</CardTitle>
</CardHeader>
<CardContent>
{rankings.length === 0 && (
<div className="text-center text-gray-500">
{t("noOneHasRecordedYet")}
</div>
)}
{rankings.map((user, index) => (
<div key={user.id} className="flex items-center space-x-4 p-2">
<div className="font-mono text-sm">#{index + 1}</div>
<div className="flex items-center space-x-2">
<Avatar className="w-8 h-8">
<AvatarImage src={user.avatarUrl} />
<AvatarFallback className="text-xl">
{user.name[0].toUpperCase()}
</AvatarFallback>
</Avatar>
<div className="max-w-20 truncate">{user.name}</div>
</div>
<div className="flex-1 font-serif text-right">
{formatDuration(user.recordingsDuration, "millisecond")}
</div>
</div>
))}
</CardContent>
</Card>
);
};

View File

@@ -11,29 +11,16 @@ import {
MediaTranscription,
} from "@renderer/components";
import { LoaderIcon } from "lucide-react";
import {
AlertDialog,
AlertDialogHeader,
AlertDialogDescription,
AlertDialogTitle,
AlertDialogContent,
AlertDialogFooter,
AlertDialogCancel,
Button,
ScrollArea,
toast,
} from "@renderer/components/ui";
import { t } from "i18next";
import { ScrollArea } from "@renderer/components/ui";
export const VideoDetail = (props: { id?: string; md5?: string }) => {
const { id, md5 } = props;
const { addDblistener, removeDbListener } = useContext(DbProviderContext);
const { EnjoyApp, webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [video, setVideo] = useState<VideoType | null>(null);
const [transcription, setTranscription] = useState<TranscriptionType>(null);
const [initialized, setInitialized] = useState<boolean>(false);
const [sharing, setSharing] = useState<boolean>(false);
// Player controls
const [currentTime, setCurrentTime] = useState<number>(0);
@@ -48,8 +35,6 @@ export const VideoDetail = (props: { id?: string; md5?: string }) => {
const [isPlaying, setIsPlaying] = useState(false);
const [isLooping, setIsLooping] = useState(false);
const [playBackRate, setPlaybackRate] = useState<number>(1);
const [displayInlineCaption, setDisplayInlineCaption] =
useState<boolean>(true);
const onTransactionUpdate = (event: CustomEvent) => {
const { model, action, record } = event.detail || {};
@@ -58,39 +43,6 @@ export const VideoDetail = (props: { id?: string; md5?: string }) => {
}
};
const handleShare = async () => {
if (!video.source.startsWith("http")) {
toast.error(t("shareFailed"), {
description: t("cannotShareLocalVideo"),
});
return;
}
if (!video.source && !video.isUploaded) {
try {
await EnjoyApp.videos.upload(video.id);
} catch (err) {
toast.error(t("shareFailed"), { description: err.message });
return;
}
}
webApi
.createPost({
targetType: "Video",
targetId: video.id,
})
.then(() => {
toast.success(t("sharedSuccessfully"), {
description: t("sharedVideo"),
});
})
.catch((err) => {
toast.error(t("shareFailed"), { description: err.message });
});
setSharing(false);
};
useEffect(() => {
const where = id ? { id } : { md5 };
EnjoyApp.videos.findOne(where).then((video) => {
@@ -138,7 +90,7 @@ export const VideoDetail = (props: { id?: string; md5?: string }) => {
mediaId={video.id}
mediaType="Video"
mediaUrl={video.src}
mediaMd5={video.md5}
waveformCacheKey={`waveform-video-${video.md5}`}
transcription={transcription}
currentTime={currentTime}
setCurrentTime={setCurrentTime}
@@ -157,9 +109,6 @@ export const VideoDetail = (props: { id?: string; md5?: string }) => {
setIsLooping={setIsLooping}
playBackRate={playBackRate}
setPlaybackRate={setPlaybackRate}
displayInlineCaption={displayInlineCaption}
setDisplayInlineCaption={setDisplayInlineCaption}
onShare={() => setSharing(true)}
/>
<ScrollArea
@@ -200,25 +149,8 @@ export const VideoDetail = (props: { id?: string; md5?: string }) => {
</div>
</div>
<AlertDialog open={sharing} onOpenChange={(value) => setSharing(value)}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("shareAudio")}</AlertDialogTitle>
<AlertDialogDescription>
{t("areYouSureToShareThisAudioToCommunity")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>{t("cancel")}</AlertDialogCancel>
<Button variant="default" onClick={handleShare}>
{t("share")}
</Button>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
{!initialized && (
<div className="top-0 w-full h-full absolute z-30 bg-background/10 flex items-center justify-center">
<div className="top-0 w-full h-full absolute z-30 bg-white/10 flex items-center justify-center">
<LoaderIcon className="text-muted-foreground animate-spin w-8 h-8" />
</div>
)}

View File

@@ -4,7 +4,6 @@ import {
VideosTable,
VideoEditForm,
AddMediaButton,
LoaderSpin,
} from "@renderer/components";
import { t } from "i18next";
import {
@@ -20,12 +19,10 @@ import {
AlertDialogDescription,
AlertDialogCancel,
AlertDialogAction,
Button,
Dialog,
DialogContent,
DialogHeader,
DialogTitle,
toast,
} from "@renderer/components/ui";
import {
DbProviderContext,
@@ -46,11 +43,12 @@ export const VideosComponent = () => {
const { addDblistener, removeDbListener } = useContext(DbProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [offset, setOffest] = useState(0);
const [loading, setLoading] = useState(false);
const navigate = useNavigate();
useEffect(() => {
fetchVideos();
}, []);
useEffect(() => {
addDblistener(onVideosUpdate);
fetchVideos();
@@ -61,36 +59,12 @@ export const VideosComponent = () => {
}, []);
const fetchVideos = async () => {
if (loading) return;
if (offset === -1) return;
const videos = await EnjoyApp.videos.findAll({
limit: 10,
});
if (!videos) return;
setLoading(true);
const limit = 10;
EnjoyApp.videos
.findAll({
offset,
limit,
})
.then((_videos) => {
if (_videos.length === 0) {
setOffest(-1);
return;
}
if (_videos.length < limit) {
setOffest(-1);
} else {
setOffest(offset + _videos.length);
}
dispatchVideos({ type: "append", records: _videos });
})
.catch((err) => {
toast.error(err.message);
})
.finally(() => {
setLoading(false);
});
dispatchVideos({ type: "set", records: videos });
};
const onVideosUpdate = (event: CustomEvent) => {
@@ -119,8 +93,6 @@ export const VideosComponent = () => {
};
if (videos.length === 0) {
if (loading) return <LoaderSpin />;
return (
<div className="flex items-center justify-center h-48 border border-dashed rounded-lg">
<AddMediaButton />
@@ -163,14 +135,6 @@ export const VideosComponent = () => {
</Tabs>
</div>
{offset > -1 && (
<div className="flex items-center justify-center my-4">
<Button variant="link" onClick={fetchVideos}>
{t("loadMore")}
</Button>
</div>
)}
<Dialog
open={!!editing}
onOpenChange={(value) => {

View File

@@ -14,7 +14,7 @@ import {
CardContent,
CardFooter,
ScrollArea,
toast,
useToast,
Progress,
} from "@renderer/components/ui";
import { t } from "i18next";
@@ -67,8 +67,10 @@ export const WhisperModelOptionsPanel = () => {
export const WhisperModelOptions = () => {
const [selectingModel, setSelectingModel] = useState<ModelType | null>(null);
const [availableModels, setAvailableModels] = useState<ModelType[]>([]);
const { whisperModelsPath, whisperModel, setWhisperModel, EnjoyApp } =
useContext(AppSettingsProviderContext);
const { whisperModelsPath, whisperModel, setWhisperModel, EnjoyApp } = useContext(
AppSettingsProviderContext
);
const { toast } = useToast();
useEffect(() => {
updateAvailableModels();
@@ -124,7 +126,10 @@ export const WhisperModelOptions = () => {
if (option.downloaded) {
setWhisperModel(option.name);
} else if (option.downloadState) {
toast.warning(t("downloading", { file: option.name }));
toast({
title: "Downloading",
description: `${option.name} is downloading...`,
});
} else {
setSelectingModel(option);
}

View File

@@ -1,10 +1,6 @@
import { createContext, useEffect, useState } from "react";
import { WEB_API_URL } from "@/constants";
import { Client } from "@/api";
import i18n from "@renderer/i18n";
type AppSettingsProviderState = {
webApi: Client;
user: UserType | null;
initialized: boolean;
version?: string;
@@ -18,12 +14,9 @@ type AppSettingsProviderState = {
ffmpegConfig?: FfmpegConfigType;
setFfmegConfig?: (config: FfmpegConfigType) => void;
EnjoyApp?: EnjoyAppType;
language?: "en" | "zh-CN";
switchLanguage?: (language: "en" | "zh-CN") => void;
};
const initialState: AppSettingsProviderState = {
webApi: null,
user: null,
initialized: false,
};
@@ -38,14 +31,11 @@ export const AppSettingsProvider = ({
}) => {
const [initialized, setInitialized] = useState<boolean>(false);
const [version, setVersion] = useState<string>("");
const [apiUrl, setApiUrl] = useState<string>(WEB_API_URL);
const [webApi, setWebApi] = useState<Client>(null);
const [user, setUser] = useState<UserType | null>(null);
const [libraryPath, setLibraryPath] = useState("");
const [whisperModelsPath, setWhisperModelsPath] = useState<string>("");
const [whisperModel, setWhisperModel] = useState<string>(null);
const [ffmpegConfig, setFfmegConfig] = useState<FfmpegConfigType>(null);
const [language, setLanguage] = useState<"en" | "zh-CN">();
const EnjoyApp = window.__ENJOY_APP__;
useEffect(() => {
@@ -54,7 +44,6 @@ export const AppSettingsProvider = ({
fetchLibraryPath();
fetchModel();
fetchFfmpegConfig();
fetchLanguage();
}, []);
useEffect(() => {
@@ -65,30 +54,6 @@ export const AppSettingsProvider = ({
validate();
}, [user, libraryPath, whisperModel, ffmpegConfig]);
useEffect(() => {
if (!apiUrl) return;
setWebApi(
new Client({
baseUrl: apiUrl,
accessToken: user?.accessToken,
})
);
}, [user, apiUrl]);
const fetchLanguage = async () => {
const language = await EnjoyApp.settings.getLanguage();
setLanguage(language as "en" | "zh-CN");
i18n.changeLanguage(language);
};
const switchLanguage = (language: "en" | "zh-CN") => {
EnjoyApp.settings.switchLanguage(language).then(() => {
i18n.changeLanguage(language);
setLanguage(language);
});
};
const fetchFfmpegConfig = async () => {
const config = await EnjoyApp.settings.getFfmpegConfig();
setFfmegConfig(config);
@@ -100,18 +65,10 @@ export const AppSettingsProvider = ({
};
const fetchUser = async () => {
const apiUrl = await EnjoyApp.app.apiUrl();
setApiUrl(apiUrl);
const currentUser = await EnjoyApp.settings.getUser();
if (!currentUser) return;
const client = new Client({
baseUrl: apiUrl,
accessToken: currentUser.accessToken,
});
client.me().then((user) => {
EnjoyApp.webApi.me().then((user) => {
if (user?.id) {
login(currentUser);
} else {
@@ -150,10 +107,6 @@ export const AppSettingsProvider = ({
setWhisperModel(whisperModel);
};
const fetchApiUrl = async () => {
return apiUrl;
};
const setModelHandler = async (name: string) => {
await EnjoyApp.settings.setWhisperModel(name);
setWhisperModel(name);
@@ -168,11 +121,8 @@ export const AppSettingsProvider = ({
return (
<AppSettingsProviderContext.Provider
value={{
language,
switchLanguage,
EnjoyApp,
version,
webApi,
user,
login,
logout,

View File

@@ -19,9 +19,10 @@ i18n
.use(initReactI18next) // passes i18n down to react-i18next
.init({
resources,
lng: "en",
supportedLngs: ["en", "zh-CN"],
fallbackLng: "en",
lng: "zh-CN", // language to use, more information here: https://www.i18next.com/overview/configuration-options#languages-namespaces-resources
// you can use the i18n.changeLanguage function to change the language manually: https://www.i18next.com/overview/api#changelanguage
// if you're using a language detector, do not define the lng option
interpolation: {
escapeValue: false, // react already safes from xss
},

View File

@@ -3,12 +3,10 @@ import { twMerge } from "tailwind-merge";
import dayjs from "dayjs";
import localizedFormat from "dayjs/plugin/localizedFormat";
import relativeTime from "dayjs/plugin/relativeTime";
import duration, { type DurationUnitType } from "dayjs/plugin/duration";
import "dayjs/locale/en";
import "dayjs/locale/zh-cn";
import i18next, { t } from "i18next";
dayjs.extend(localizedFormat);
dayjs.extend(duration);
dayjs.extend(relativeTime);
export function cn(...inputs: ClassValue[]) {
@@ -20,23 +18,6 @@ export function secondsToTimestamp(seconds: number) {
return date.toISOString().substr(11, 8);
}
export function humanizeDuration(
duration: number,
unit: DurationUnitType = "second"
) {
dayjs.locale(i18next.resolvedLanguage?.toLowerCase() || "en");
return dayjs.duration(duration, unit).humanize();
}
export function formatDuration(
duration: number,
unit: DurationUnitType = "second",
format = "HH:mm:ss"
) {
dayjs.locale(i18next.resolvedLanguage?.toLowerCase() || "en");
return dayjs.duration(duration, unit).format(format);
}
export function bytesToSize(bytes: number) {
const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
if (bytes === 0) {

View File

@@ -1,49 +0,0 @@
import {
Button,
Tabs,
TabsList,
TabsContent,
TabsTrigger,
} from "@renderer/components/ui";
import { UsersRankings, Posts } from "@renderer/components";
import { ChevronLeftIcon } from "lucide-react";
import { useNavigate } from "react-router-dom";
import { t } from "i18next";
export default () => {
const navigate = useNavigate();
return (
<div className="bg-muted h-full px-4 lg:px-8 py-6">
<div className="max-w-screen-md mx-auto mb-6">
<div className="flex space-x-1 items-center mb-4">
<Button variant="ghost" size="icon" onClick={() => navigate(-1)}>
<ChevronLeftIcon className="w-5 h-5" />
</Button>
<span>{t("sidebar.community")}</span>
</div>
<Tabs defaultValue="square">
<TabsList className="mb-4">
<TabsTrigger value="square">{t("square")}</TabsTrigger>
<TabsTrigger
value="rankings"
disabled
className="cursor-not-allowed"
data-tooltip-id="global-tooltip"
data-tooltip-content={t("comingSoon")}
>
{t("rankings")}
</TabsTrigger>
</TabsList>
<TabsContent value="square">
<Posts />
</TabsContent>
<TabsContent value="rankings"></TabsContent>
</Tabs>
</div>
</div>
);
};

View File

@@ -6,9 +6,13 @@ import {
Sheet,
SheetContent,
SheetTrigger,
toast,
useToast,
} from "@renderer/components/ui";
import { MessageComponent, ConversationForm } from "@renderer/components";
import {
MessageComponent,
ConversationForm,
SpeechForm,
} from "@renderer/components";
import { SendIcon, BotIcon, LoaderIcon, SettingsIcon } from "lucide-react";
import { Link, useParams } from "react-router-dom";
import { t } from "i18next";
@@ -28,6 +32,7 @@ export default () => {
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [content, setConent] = useState<string>("");
const [submitting, setSubmitting] = useState<boolean>(false);
const { toast } = useToast();
const [messages, dispatchMessages] = useReducer(messagesReducer, []);
const [offset, setOffest] = useState(0);
@@ -77,7 +82,10 @@ export default () => {
const handleSubmit = async (text?: string, file?: string) => {
if (submitting) {
toast.warning(t("anotherRequestIsPending"));
toast({
title: t("warning"),
description: t("anotherRequestIsPending"),
});
}
text = text ? text : content;
@@ -271,7 +279,7 @@ export default () => {
</ScrollArea>
<div className="px-4 absolute w-full bottom-0 left-0 h-14 bg-muted z-50">
<div className="focus-within:bg-background px-4 py-2 flex items-center space-x-4 rounded-lg border">
<div className="focus-within:bg-white px-4 py-2 flex items-center space-x-4 rounded-lg border">
<Textarea
rows={1}
ref={inputRef}
@@ -279,7 +287,7 @@ export default () => {
value={content}
onChange={(e) => setConent(e.target.value)}
placeholder={t("pressEnterToSend")}
className="px-0 py-0 shadow-none border-none focus-visible:outline-0 focus-visible:ring-0 border-none bg-muted focus:bg-background min-h-[1.25rem] max-h-[3.5rem] !overflow-x-hidden"
className="px-0 py-0 shadow-none border-none focus-visible:outline-0 focus-visible:ring-0 border-none bg-muted focus:bg-white min-h-[1.25rem] max-h-[3.5rem] !overflow-x-hidden"
/>
<Button
type="submit"

View File

@@ -86,7 +86,7 @@ export default () => {
{conversations.map((conversation) => (
<Link key={conversation.id} to={`/conversations/${conversation.id}`}>
<div
className="bg-background text-muted-foreground rounded-full w-full mb-2 p-4 hover:bg-primary hover:text-muted cursor-pointer flex items-center"
className="bg-white text-primary rounded-full w-full mb-2 p-4 hover:bg-primary hover:text-white cursor-pointer flex items-center"
style={{
borderLeftColor: `#${conversation.id
.replaceAll("-", "")

View File

@@ -6,6 +6,7 @@ import {
LoginForm,
ChooseLibraryPathInput,
WhisperModelOptionsPanel,
UserCard,
FfmpegCheck,
} from "@renderer/components";
import { AppSettingsProviderContext } from "@renderer/context";
@@ -92,7 +93,7 @@ export default () => {
</div>
</div>
<div className="flex-1 flex justify-center items-center">
{currentStep == 1 && <LoginForm />}
{currentStep == 1 && (user ? <UserCard user={user} /> : <LoginForm />)}
{currentStep == 2 && <ChooseLibraryPathInput />}
{currentStep == 3 && <WhisperModelOptionsPanel />}
{currentStep == 4 && <FfmpegCheck />}

View File

@@ -1,25 +1,19 @@
import { Button } from "@renderer/components/ui";
import { StoryForm, StoryCard, LoaderSpin } from "@renderer/components";
import { useState, useContext, useEffect } from "react";
import { AppSettingsProviderContext } from "@renderer/context";
import { t } from "i18next";
export default () => {
const [stories, setStorys] = useState<StoryType[]>([]);
const [loading, setLoading] = useState<boolean>(true);
const { webApi } = useContext(AppSettingsProviderContext);
const [nextPage, setNextPage] = useState(1);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const fetchStories = async (page: number = nextPage) => {
if (!page) return;
webApi
const fetchStorys = async () => {
EnjoyApp.webApi
.mineStories()
.then((response) => {
if (response?.stories) {
setStorys([...stories, ...response.stories]);
setStorys(response.stories);
}
setNextPage(response.next);
})
.finally(() => {
setLoading(false);
@@ -27,7 +21,7 @@ export default () => {
};
useEffect(() => {
fetchStories();
fetchStorys();
}, []);
return (
@@ -44,14 +38,6 @@ export default () => {
))}
</div>
)}
{nextPage && (
<div className="py-4 flex justify-center">
<Button variant="link" onClick={() => fetchStories(nextPage)}>
{t("loadMore")}
</Button>
</div>
)}
</div>
);
};

View File

@@ -1,4 +1,4 @@
import { Input, Button, ScrollArea, toast } from "@renderer/components/ui";
import { Input, Button, ScrollArea, useToast } from "@renderer/components/ui";
import {
LoaderSpin,
StoryViewer,
@@ -26,7 +26,8 @@ export default () => {
});
const [loading, setLoading] = useState(true);
const [readable, setReadable] = useState(true);
const { EnjoyApp, webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const { toast } = useToast();
const [meanings, setMeanings] = useState<MeaningType[]>([]);
const [marked, setMarked] = useState<boolean>(false);
const [doc, setDoc] = useState<any>(null);
@@ -51,7 +52,7 @@ export default () => {
const createStory = async () => {
if (!story) return;
webApi
EnjoyApp.webApi
.createStory({
url: story.metadata?.url || story.url,
...story,
@@ -72,7 +73,10 @@ export default () => {
if (state == "did-fail-load") {
setLoading(false);
if (error) {
toast.error(error);
toast({
title: error,
variant: "destructive",
});
setError(error);
}

View File

@@ -1,5 +1,5 @@
import { t } from "i18next";
import { ScrollArea, toast } from "@renderer/components/ui";
import { ScrollArea } from "@renderer/components/ui";
import {
LoaderSpin,
PagePlaceholder,
@@ -16,7 +16,7 @@ nlp.plugin(paragraphs);
let timeout: NodeJS.Timeout = null;
export default () => {
const { id } = useParams<{ id: string }>();
const { webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [loading, setLoading] = useState<boolean>(true);
const [story, setStory] = useState<StoryType>();
const [meanings, setMeanings] = useState<MeaningType[]>([]);
@@ -26,7 +26,7 @@ export default () => {
const [doc, setDoc] = useState<any>(null);
const fetchStory = async () => {
webApi
EnjoyApp.webApi
.story(id)
.then((story) => {
setStory(story);
@@ -41,7 +41,7 @@ export default () => {
const fetchMeanings = async () => {
setScanning(true);
webApi
EnjoyApp.webApi
.storyMeanings(id, { items: 500 })
.then((response) => {
if (!response) return;
@@ -88,14 +88,14 @@ export default () => {
});
});
webApi.lookupInBatch(vocabulary).then((response) => {
EnjoyApp.webApi.lookupInBatch(vocabulary).then((response) => {
const { errors } = response;
if (errors.length > 0) {
console.warn(errors);
return;
}
webApi.extractVocabularyFromStory(id).then(() => {
EnjoyApp.webApi.extractVocabularyFromStory(id).then(() => {
fetchStory();
if (pendingLookups.length > 0) return;
@@ -108,29 +108,16 @@ export default () => {
if (!story) return;
if (story.starred) {
webApi.unstarStory(id).then((result) => {
EnjoyApp.webApi.unstarStory(id).then((result) => {
setStory({ ...story, starred: result.starred });
});
} else {
webApi.starStory(id).then((result) => {
EnjoyApp.webApi.starStory(id).then((result) => {
setStory({ ...story, starred: result.starred });
});
}
};
const handleShare = async () => {
webApi
.createPost({ targetId: story.id, targetType: "Story" })
.then(() => {
toast.success(t("sharedStory"));
})
.catch((error) => {
toast.error(t("shareFailed"), {
description: error.message,
});
});
};
useEffect(() => {
fetchStory();
fetchMeanings();
@@ -175,7 +162,6 @@ export default () => {
starred={story.starred}
toggleStarred={toggleStarred}
pendingLookups={pendingLookups}
handleShare={handleShare}
/>
<StoryViewer

View File

@@ -11,14 +11,14 @@ export default () => {
const [loading, setLoading] = useState<boolean>(false);
const [meanings, setMeanings] = useState<MeaningType[]>([]);
const { webApi } = useContext(AppSettingsProviderContext);
const { EnjoyApp } = useContext(AppSettingsProviderContext);
const [currentIndex, setCurrentIndex] = useState<number>(0);
const [nextPage, setNextPage] = useState(1);
const fetchMeanings = async (page: number = nextPage) => {
if (!page) return;
webApi
EnjoyApp.webApi
.mineMeanings({ page, items: 10 })
.then((response) => {
setMeanings([...meanings, ...response.meanings]);
@@ -38,8 +38,7 @@ export default () => {
}
return (
<div className="h-[100vh] bg-muted">
<div className="max-w-screen-md mx-auto px-4 py-6">
<div className="h-[100vh] max-w-screen-md mx-auto px-4 py-6">
<div className="flex space-x-1 items-center mb-4">
<Button variant="ghost" size="icon" onClick={() => navigate(-1)}>
<ChevronLeftIcon className="w-5 h-5" />
@@ -63,7 +62,7 @@ export default () => {
>
<ChevronLeftIcon className="w-5 h-5" />
</Button>
<div className="bg-background flex-1 h-5/6 border p-6 rounded-xl shadow-lg">
<div className="flex-1 h-5/6 border p-6 rounded-xl shadow-lg">
<MeaningMemorizingCard meaning={meanings[currentIndex]} />
</div>
<Button
@@ -84,6 +83,5 @@ export default () => {
</div>
)}
</div>
</div>
);
};

View File

@@ -1,19 +1,12 @@
export const audiosReducer = (
audios: AudioType[],
action: {
type: "append" | "create" | "update" | "destroy" | "set";
type: "create" | "update" | "destroy" | "set";
record?: Partial<AudioType>;
records?: Partial<AudioType>[];
}
) => {
switch (action.type) {
case "append": {
if (action.record) {
return [...audios, action.record];
} else if (action.records) {
return [...audios, ...action.records];
}
}
case "create": {
return [action.record, ...audios];
}

View File

@@ -1,19 +1,12 @@
export const videosReducer = (
videos: VideoType[],
action: {
type: "append" | "create" | "update" | "destroy" | "set";
type: "create" | "update" | "destroy" | "set";
record?: Partial<VideoType>;
records?: Partial<VideoType>[];
}
) => {
switch (action.type) {
case "append": {
if (action.record) {
return [...videos, action.record];
} else if (action.records) {
return [...videos, ...action.records];
}
}
case "create": {
return [action.record, ...videos];
}

View File

@@ -14,7 +14,6 @@ import Story from "./pages/story";
import Books from "./pages/books";
import Profile from "./pages/profile";
import Home from "./pages/home";
import Community from "./pages/community";
import StoryPreview from "./pages/story-preview";
export default createHashRouter([
@@ -24,10 +23,6 @@ export default createHashRouter([
errorElement: <ErrorPage />,
children: [
{ index: true, element: <Home /> },
{
path: "/community",
element: <Community />,
},
{
path: "/profile",
element: <Profile />,

enjoy/src/types.d.ts
View File

@@ -75,13 +75,11 @@ type TransactionStateType = {
record?: AudioType | UserType | RecordingType;
};
type FfmpegConfigType = {
os: string;
arch: string;
commandExists: boolean;
ffmpegPath?: string;
ffprobePath?: string;
scanDirs: string[];
ready: boolean;
};
@@ -107,11 +105,37 @@ type MeaningType = {
lookups: LookupType[];
};
type StoryType = {
id: string;
url: string;
title: string;
content: string;
metadata: {
[key: string]: string;
};
vocabulary?: string[];
extracted?: boolean;
starred?: boolean;
createdAt: Date;
updatedAt: Date;
};
type CreateStoryParamsType = {
title: string;
content: string;
url: string;
html: string;
metadata: {
[key: string]: string;
};
};
type PagyResponseType = {
page: number;
next: number | null;
};
type AudibleBookType = {
title: string;
subtitle: string;

View File

@@ -11,7 +11,6 @@ type AudioType = {
transcribing?: boolean;
recordingsCount?: number;
recordingsDuration?: number;
isUploaded?: boolean;
uploadedAt?: Date;
createdAt: Date;
updatedAt: Date;

View File

@@ -1,7 +1,6 @@
type EnjoyAppType = {
app: {
reset: () => Promise<void>;
resetSettings: () => Promise<void>;
relaunch: () => Promise<void>;
reload: () => Promise<void>;
isPackaged: () => Promise<boolean>;
@@ -76,9 +75,7 @@ type EnjoyAppType = {
LlmProviderType
) => Promise<void>;
getFfmpegConfig: () => Promise<FfmpegConfigType>;
setFfmpegConfig: (config: FfmpegConfigType) => Promise<void>;
getLanguage: () => Promise<string>;
switchLanguage: (language: string) => Promise<void>;
setFfmpegConfig: () => Promise<void>;
};
fs: {
ensureDir: (path: string) => Promise<boolean>;
@@ -96,7 +93,7 @@ type EnjoyAppType = {
audios: {
findAll: (params: object) => Promise<AudioType[]>;
findOne: (params: object) => Promise<AudioType>;
create: (uri: string, params?: object) => Promise<AudioType>;
create: (source: string, params?: object) => Promise<AudioType>;
update: (id: string, params: object) => Promise<AudioType | undefined>;
destroy: (id: string) => Promise<undefined>;
transcribe: (id: string) => Promise<void>;
@@ -105,8 +102,8 @@ type EnjoyAppType = {
videos: {
findAll: (params: object) => Promise<VideoType[]>;
findOne: (params: object) => Promise<VideoType>;
create: (uri: string, params?: any) => Promise<VideoType>;
update: (id: string, params: any) => Promise<VideoType | undefined>;
create: (source: string, params?: object) => Promise<VideoType>;
update: (id: string, params: object) => Promise<VideoType | undefined>;
destroy: (id: string) => Promise<undefined>;
transcribe: (id: string) => Promise<void>;
upload: (id: string) => Promise<void>;
@@ -146,9 +143,9 @@ type EnjoyAppType = {
) => Promise<SegementRecordingStatsType>;
};
conversations: {
findAll: (params: any) => Promise<ConversationType[]>;
findOne: (params: any) => Promise<ConversationType>;
create: (params: any) => Promise<ConversationType>;
findAll: (params: object) => Promise<ConversationType[]>;
findOne: (params: object) => Promise<ConversationType>;
create: (params: object) => Promise<ConversationType>;
update: (id: string, params: object) => Promise<ConversationType>;
destroy: (id: string) => Promise<void>;
ask: (
@@ -162,7 +159,7 @@ type EnjoyAppType = {
arrayBuffer: ArrayBuffer;
};
}
) => Promise<MessageType[]>;
) => Promise<MessageType>;
};
messages: {
findAll: (params: object) => Promise<MessageType[]>;
@@ -180,12 +177,6 @@ type EnjoyAppType = {
};
ffmpeg: {
download: () => Promise<FfmpegConfigType>;
check: () => Promise<boolean>;
discover: () => Promise<{
ffmpegPath: string;
ffprobePath: string;
scanDirs: string[];
}>;
};
download: {
onState: (callback: (event, state) => void) => void;
@@ -194,6 +185,70 @@ type EnjoyAppType = {
dashboard: () => Promise<DownloadStateType[]>;
removeAllListeners: () => void;
};
webApi: {
auth: (params: { provider: string; code: string }) => Promise<UserType>;
me: () => Promise<UserType>;
lookup: (params: {
word: string;
context?: string;
sourceId?: string;
sourceType?: string;
}) => Promise<LookupType>;
lookupInBatch: (
params: {
word: string;
context?: string;
sourceId?: string;
sourceType?: string;
}[]
) => Promise<{ successCount: number; errors: string[]; total: number }>;
mineMeanings: (params?: {
page?: number;
items?: number;
sourceId?: string;
sourceType?: string;
}) => Promise<
{
meanings: MeaningType[];
} & PagyResponseType
>;
createStory: (params: {
title: string;
content: string;
url: string;
metadata: {
[key: string]: any;
};
}) => Promise<StoryType>;
extractVocabularyFromStory: (id: string) => Promise<string[]>;
story: (id: string) => Promise<StoryType>;
stories: (params?: { page: number }) => Promise<{
stories: StoryType[];
page: number;
next: number | null;
}>;
mineStories: (params?: { page: number }) => Promise<{
stories: StoryType[];
page: number;
next: number | null;
}>;
storyMeanings: (
storyId: string,
params?: {
page?: number;
items?: number;
sourceId?: string;
sourceType?: string;
}
) => Promise<
{
meanings: MeaningType[];
pendingLookups: LookupType[];
} & PagyResponseType
>;
starStory: (id: string) => Promise<{ starred: boolean }>;
unstarStory: (id: string) => Promise<{ starred: boolean }>;
};
cacheObjects: {
get: (key: string) => Promise<any>;
set: (key: string, value: any, ttl?: number) => Promise<void>;
@@ -205,8 +260,4 @@ type EnjoyAppType = {
process: (params: any) => Promise<void>;
update: (id: string, params: any) => Promise<void>;
};
waveforms: {
find: (id: string) => Promise<WaveFormDataType>;
save: (id: string, data: WaveFormDataType) => Promise<void>;
};
};

View File

@@ -1,10 +0,0 @@
type MediumType = {
id: string;
md5: string;
mediumType: string;
coverUrl?: string;
sourceUrl?: string;
extname?: string;
createdAt: string;
updatedAt: string;
}

View File

@@ -1,17 +0,0 @@
type PostType = {
id: string;
metadata: {
type: 'text' | 'prompt' | 'llm_configuration';
content:
| string
| {
[key: string]: any;
};
};
user: UserType;
targetType?: string;
targetId?: string;
target?: MediumType | StoryType | RecordingType;
createdAt: Date;
updatedAt: Date;
};

View File

@@ -1,12 +1,12 @@
type RecordingType = {
id: string;
filename?: string;
filename: string;
target?: AudioType | (MessageType & any);
targetId: string;
targetType: string;
pronunciationAssessment?: PronunciationAssessmentType & any;
referenceId: number;
referenceText?: string;
segmentIndex: number;
segmentText?: string;
duration?: number;
src?: string;
md5: string;

View File

@@ -1,24 +0,0 @@
type StoryType = {
id: string;
url: string;
title: string;
content: string;
metadata: {
[key: string]: string;
};
vocabulary?: string[];
extracted?: boolean;
starred?: boolean;
createdAt: Date;
updatedAt: Date;
};
type CreateStoryParamsType = {
title: string;
content: string;
url: string;
html: string;
metadata: {
[key: string]: string;
};
};

View File

@@ -3,6 +3,4 @@ type UserType = {
name: string;
avatarUrl?: string;
accessToken?: string;
recordingsCount?: number;
recordingsDuration?: number;
};

View File

@@ -12,7 +12,6 @@ type VideoType = {
transcribing: boolean;
recordingsCount?: number;
recordingsDuration?: number;
isUploaded?: boolean;
uploadedAt?: Date;
createdAt: Date;
updatedAt: Date;

View File

@@ -1,6 +0,0 @@
type WaveFormDataType = {
peaks: number[];
sampleRate: number;
duration: number;
frequencies: number[];
};

View File

@@ -1,97 +0,0 @@
# Say What You Want to Say
In the past, when people "learned a foreign language," the hardest part was this:
> The more complex your mind, the harder the learning.
Conversely, simple-minded people somehow pick up foreign languages faster and more easily. Why? Because their minds are simple, "what they want to say" is simple too, and therefore easier to learn. What about people with more complicated minds? To them, saying "Hello!" when meeting someone and "Goodbye!" when leaving hardly counts as "speaking a foreign language." What they want to say is harder to say: deeper in meaning, longer in length, more complex in structure, tighter in logic. At the very least, they are unlikely to settle for "just a few words"; they want to explain a matter or an idea clearly. And so, both suddenly and without noticing it, they raise the difficulty facing them to the level of "impossible."
Take the opening paragraph above, for example. It is not particularly complex or profound, and its logic is clear enough. But try saying it in English yourself. Try it and you will see: six years of primary school, six years of secondary school, four years of university, more than a decade of so-called "study," and it does not seem to help much.
"If only I had a private foreign tutor of my own..." Everyone has had that thought. Either you draft something first and have the tutor check it, or you simply have them translate it and say it to you, and then you practice it. The trouble is, how could you possibly have a private tutor on call at all times? I certainly never did. And this is not merely a question of "whether foreign tutors are expensive"; it is really a question of population ratios: China simply has too many people.
Even if you invited every American over, there obviously would not be enough of them. Move the entire population of Canada to China and one Shanghai alone would exhaust it (Canada's population is only about that of Shanghai: twenty-some million). Besides, is bringing in foreigners enough by itself? Not necessarily. Not every foreigner is literate; not every literate foreigner has a large enough vocabulary; not every foreigner with a large vocabulary is educated (that is, has enough knowledge and enough capacity for thought).
In fact, of the so-called "foreign teachers" working in China, (I would guess) more than half are probably all show and no substance, and it is probably the same everywhere in the world. They do not actually have enough knowledge, enough thinking ability, or enough academic training; the odds that they could score full marks on a TOEFL essay, let alone an SAT or GRE essay, are actually quite low. Most of these foreign teachers could not get into a good university back home in the first place (and are even less likely to get into graduate school or pursue a doctorate later). For Chinese students preparing for the TOEFL, SAT, or GRE in particular, how many of these "foreign teachers" can really be considered qualified?
Going one step further, even if some of them are indeed qualified, so what? They are so few (certainly a tiny minority); what are the odds that ordinary people like you and me would ever get their help? And if we really did have the chance to meet such a genuinely qualified tutor, could we actually afford the fees (asking such a person to work for free would be a bit much, wouldn't it)? Even if we met one and could afford them, would our own level at the time be worthy of help that expensive and that precious?
**Then, all of a sudden, artificial intelligence arrived.**
Suddenly, **AI became the best "foreign-language teacher" in the world.** No "English teacher" made in China is better or more useful than AI. In fact, no native speaker of any language, that is, no so-called "foreign teacher," can do better than AI in this particular domain of "foreign-language learning."
You assign the AI a role and tell it:
> You are my English coach.
> Rewrite what I say into English.
> There is no need to translate word for word.
> Analyze my content carefully, then reorganize it in English with clear logic.
> Use idiomatic American English, with a New York flavor.
> Prefer everyday vocabulary, favoring phrasal verbs and common idioms.
> No sentence should be longer than 20 words.
Then you give it a passage you want to say, for instance the paragraph at the very beginning of this article. You may have to wait a moment, but it responds faster than any human, and it returns the result; ask for as many versions as you like and it will produce them. Here is one version:
> In the past, when people were "learning a foreign language", one of the toughest parts was:
>
> > The more complex-minded individuals found it harder.
>
> Conversely, those with simpler minds learned languages faster... why? Because their thoughts are simple; what they want to say is straightforward and thus easier to learn. What about those with a more intricate mind? To them saying "Hello!" as a greeting or "Goodbye!" when leaving doesn't really count as “speaking a foreign language”. Their desired expressions are relatively difficult, profound in meaning, larger in scope, structurally complicated and logically stringent… At least they ain't likely satisfied by uttering just “a few words"; They aim at making things clear not only out of blue but also subconsciously. In doing so they inadvertently raise the difficulty level for themselves up to an impossible notch.
Going a step further, it can even generate audio for us:
<audio src="audio/b2a0c7a6fcdc2e269f35fd045e5debfe.mp3">
Read along with it, say it aloud, repeat until fluent, even memorize it, and you are done! (A minimal code sketch of this workflow follows below.)
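The loop described above (a fixed coach prompt, a passage of your own, a rewritten English version, and finally audio to imitate) is straightforward to script. Below is a minimal sketch in TypeScript, assuming the official `openai` Node SDK and an `OPENAI_API_KEY` in the environment; the model and voice names (`gpt-4o-mini`, `tts-1`, `alloy`) are illustrative choices, not something the text above prescribes.

```typescript
// Hypothetical sketch of the "AI English coach" workflow described above.
// Assumes the official `openai` Node SDK; model/voice names are illustrative.
import OpenAI from "openai";
import { writeFile } from "node:fs/promises";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const COACH_PROMPT = [
  "You are my English coach.",
  "Rewrite what I say into English.",
  "Do not translate word for word.",
  "Analyze my meaning first, then reorganize it in English with clear logic.",
  "Use idiomatic American English with a New York flavor.",
  "Prefer everyday vocabulary, phrasal verbs, and common idioms.",
  "Keep every sentence under 20 words.",
].join("\n");

export async function rewriteAndSpeak(passage: string): Promise<string> {
  // 1. Ask the model to rewrite the passage according to the coach prompt.
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumption: any capable chat model works here
    messages: [
      { role: "system", content: COACH_PROMPT },
      { role: "user", content: passage },
    ],
  });
  const english = completion.choices[0]?.message?.content ?? "";

  // 2. Turn the rewritten text into audio to shadow and recite.
  const speech = await client.audio.speech.create({
    model: "tts-1", // assumption: OpenAI's text-to-speech endpoint
    voice: "alloy",
    input: english,
  });
  await writeFile("coach.mp3", Buffer.from(await speech.arrayBuffer()));

  return english;
}
```

Calling `rewriteAndSpeak(...)` with the opening paragraph of this article would produce both an English rewrite and a `coach.mp3` file to read along with.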
"**Being able to say whatever you want to say**": that matters enormously! And you no longer have to worry about whether you said it correctly or well; if you say it the way the AI wrote it, nothing can go wrong. What a relief that is! As for the "authenticity" you always used to fret over, there is no pressure there either. Since it is AI, ask for a "New York accent" and it gives you New York-style text; ask for Silicon Valley and you get Silicon Valley; London, Melbourne, Toronto, even Scottish or Irish, whatever you want, it can do them all. Whether you want "casual everyday" or "scholarly," it can handle any style for you.
On the one hand, the "maximum difficulty" has dropped; on the other hand, "saying what you want to say" was never actually that hard to begin with. At the very least, it is much easier than people imagine.
The problem with traditional "foreign-language textbooks," especially "spoken English books," is that they try to teach you everything. After all, if a "spoken English book" were not comprehensive, it simply would not sell. But in reality, what you need is not "knowing how to say everything"; it is being able to say the things you actually want to say and know how to say.
Take a dialogue set in a "Starbucks," for example. If you aim for completeness, it feels like there is far too much to learn. Oddly enough, we almost never say the word "coffee" in a coffee shop: latte, Americano, caramel macchiato, cappuccino, skim milk, decaf, syrup, hazelnut syrup, how many pumps of syrup... But for me it is always latte, hot, medium. And then? Then nothing, really nothing. Seeing that I am alone, they usually do not ask how many cups, and if they do ask, I probably do not even need to speak; holding up one finger is enough.
By the same token, in everyday conversation, however deep it goes, what we say is really "only what I know and have thought about," not "knowing everything, understanding everything, able to chat about everything." It is like when I lecture from a stage: what I need is "to make absolutely no mistakes during the hour or two I am up there," not "to know everything and be able to do everything." I am not an encyclopedia, a search engine, or an AI, after all.
This is the fundamental reason why no "spoken English book" in the world truly fits you: everyone is just too different. Different feelings, different ideas, different experiences, different ways of expressing themselves, different in every respect; otherwise, why would we need to communicate at all?
The result? Out of ten sentences in the textbook, only one is of any use to me, and nine out of ten sentences I want to say are not in the textbook. How frustrating is that?!
So you have to build up, through accumulation, a "**spoken-English book tailored specifically to yourself**." The arrival of AI perfectly resolves a predicament that left everyone helpless for so long.
Historically, the advent of radio once "suddenly increased the number of multilingual speakers." Before that, the spread of dictionaries played the same role; later, tape recorders, electronic dictionaries, online encyclopedias, search engines, films and TV shows distributed over the internet, and so on, all greatly increased the number of multilingual speakers. And now?
The arrival of AI will once again greatly increase the number of multilingual speakers, and the increase will likely be not just a doubling or tripling but a change of one or even several orders of magnitude.
More than twenty years ago I lamented that in teaching, thanks to the internet, "one teacher per field is enough": we only need the single best teacher, and the rest might as well change careers and do something else, the better to contribute to society and raise its overall efficiency.
Five years passed, ten years, twenty years, and my "prediction" did not come true. Why? The problem was never "whether the internet was advanced enough." On reflection, the conclusion is that "no one can prove they are the best teacher," and at the same time "no one can tell who the best teacher actually is."
But this time is different: **AI is the best teacher**, at least for language learning. The AI that has just suddenly risen is built on "large language models," so text is exactly where it is strongest. Suddenly, no individual, whether a "foreign teacher" or even a "linguist," can match AI, at least when it comes to "being a foreign-language teacher."
The most astonishing part, of course, is that "**AI is extremely cheap**," even "absurdly cheap." The paid tier of OpenAI costs only 20 US dollars a month, that is, less than 150 RMB, which comes to merely about 1,800 RMB a year. And an AI that costs 1,800 RMB a year beats any foreign tutor. If all you need is text generation, there are plenty of open-source models you can use, which makes it free outright. In that case the biggest expense turns out to be simply "buying a computer."
The key is that it is not only cheap, it is also "tireless." All you have to say is, "Please check my writing for grammatical errors, correct them, and then list the reason for each change separately..." If you have never been a teacher, you may not realize how many brain cells a foreign-language teacher burns through "marking essays" to accomplish exactly that task!
Suddenly, everything changed.
I told my community that in 2024 I want to turn all the parents in it, every single one of them, into "学爸" (xué bà). Here the "bà" is not the "霸" of "霸王" (as in 学霸, a top student) but the "爸" of "爸妈" (mom and dad); and the "爸" in "学爸" does not refer only to fathers, it means "parents," mom or dad alike.
I said that if we spend one year and invest at least 1,000 hours of attention, every one of us can become "**better than every English teacher in all of China, at the very least**." "Better than all of them": you may feel that sounds a bit grandiose. Yet it is no exaggeration at all; it is something that can definitely be done.
Where, you may ask, does my confidence come from?
It is simple: it is not that I am brilliant, nor that you are a genius, but that "**AI really is that good**." AI is the best teacher in the world; who can compete with it? With AI, we do not need to be the teacher, and in fact there is no need to compare ourselves with the teaching profession at all. All we need to be is the "**teaching assistant**."
> AI is the best "teacher"; we are our own best "teaching assistants."
Is there any problem with that? Without a doubt, **anyone who genuinely invests 1,000 hours of attention within one year can do it**. Those who cannot are not held back by IQ, not by talent, and not by any other reason; they simply "failed to invest 1,000 hours of attention within the year." There is no other possibility whatsoever.
All we are trying to do is "be a competent teaching assistant." What is there that cannot be done? Just do it. If you don't, the year will slip by on its own anyway.
**Be your own teaching assistant, be your own student, and have AI as the best teacher.** That is what "self-learning" really means in this "age of AI." Who's afraid of whom?
Of course, "saying what you want to say" is still easier said than done, because most people will eventually realize that the biggest problem they face is, surprisingly, "**I don't seem to have much to say...**" "Not knowing what to say" is far more frightening than "not knowing how to say it." It is just that, until now, this truly frightening problem never had a chance to surface, because all sorts of other, shallower obstacles stood in the way first.
As for that seemingly simple and straightforward "starter task," once you actually try it you will find that, beyond "practicing three hours a day," the step that comes first, "preparing what you want to say," may take far more time than you assumed. After all, over the years many people sat through what were effectively fake "Chinese composition classes." Not that there is no remedy: when you have time, listen carefully to 《李笑来的写作课》 (Li Xiaolai's writing course), and listen to it more than once.

Some files were not shown because too many files have changed in this diff.